Upgrading vROPS (vRealize Operations Manager) to 8.6.3 via vRSLCM

In this post I will go over upgrading my 8.x vROPS appliance to 8.6.3 using vRSLCM (vRealize Suite Lifecycle Manager). As a prerequisite, vRSLCM needs to be upgraded to 8.8; instructions can be found here. The upgrade already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. Make sure to select the upgrade package, not the install package. We can select what we need and click on Add.

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes vROPS

Click on Upgrade

An inventory sync is recommended if the environment has changed since LCM last performed a sync. We can trigger the sync from the UI, or click on Proceed to continue.

Select product Version 8.6.3 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

Run the Upgrade Assessment tool to make sure the currently used dashboards, reports, metrics, etc. are still compatible with the new version.

Once the report has finished running we can either download or view it. Once everything has been reviewed, check the I have viewed the report and agree to proceed box and click Next to proceed to the next step.

A new feature is the ability to automatically create a snapshot prior to the upgrade and remove it after the upgrade. On this screen we can also choose whether to keep the snapshots post-upgrade, for validation testing for example. Click Next.

Run the Precheck to make sure there are no errors or issues.

Once the check is complete we can review the checks that were performed and we can continue by clicking Next.

Review the upgrade details, click on Next and then Submit. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.6.3.

Since this is a major upgrade, I strongly recommend clearing the browser cache before using the new vROPS version.

Upgrading vRLI (vRealize Log Insight) to 8.6.2 via vRSLCM

In this post I will go over upgrading my 8.x vRLI appliance to 8.6.2 using vRSLCM (vRealize Suite Lifecycle Manager). As a prerequisite, vRSLCM needs to be upgraded to 8.6.2 or 8.7; instructions can be found here. The upgrade already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. Make sure to select the upgrade package, not the install package. We can select what we need and click on Add.

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes vRLI

Click on Upgrade

An inventory sync is recommended if the environment has changed since LCM last performed a sync. We can trigger the sync from the UI, or click on Proceed to continue.

Select product Version 8.6.2 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

A new feature is the ability to automatically create a snapshot prior to the upgrade and remove it after the upgrade. On this screen we can also choose whether to keep the snapshots post-upgrade, for validation testing for example. Click Next.

Run the Precheck to make sure there are no errors or issues.

Once the check is complete we can review the checks that were performed and we can continue by clicking Next.

Review the upgrade details, click on Next and then Finish. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.6.2.

Since this is a major upgrade, I strongly recommend clearing the browser cache before using the new vRLI version.

Upgrading vRA (vRealize Automation) to 8.8

In this post I will go over upgrading my 8.x vRA appliance to 8.8. As a prerequisite, vRSLCM needs to be upgraded to 8.8; instructions can be found here. The upgrade already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. We can select what we need and click on Add

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes vRA

Click on Upgrade

An inventory sync is recommended if the environment has changed since LCM last performed a sync. We can trigger the sync from the UI, or click on Proceed to continue.

Select product Version 8.8.0 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

A new feature is the ability to automatically create a snapshot prior to the upgrade and remove it after the upgrade. On this screen we can also choose whether to keep the snapshots post-upgrade, for validation testing for example. Click Next.

Run the Precheck to make sure there are no errors

The next screen reminds us of the hardware requirements for vRA and vIDM, which can be checked here for vRA and here for vIDM. As far as I can tell, they haven't changed since the 8.3 release. Check the I took care of the manual steps above and am ready to proceed box and click on Run Precheck.

Once the check is complete we can review the checks that were performed and we can continue by clicking Next.

Review the upgrade details and click on Next. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.8.

Since this is a major upgrade, I strongly recommend clearing the browser cache before using the new vRA version.

SSC 8.8 urllib3 (1.25.11) or chardet (4.0.0) doesn’t match a supported version

I recently upgraded my LCM-deployed SSC server to 8.8. If you need a guide for the upgrade, you can find my other post here.

After the upgrade completed I noticed strange behavior in the SSC UI, so I checked the status of the services. Here are the errors I found and how I fixed them.

The first step was to check the status of the service:

systemctl status salt-master

The return was this

* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-05-16 03:32:35 UTC; 6s ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 4577 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 335.0M
   CGroup: /system.slice/salt-master.service
           |-4577 /bin/python3 /usr/bin/salt-master
           |-4581 /bin/python3 /usr/bin/salt-master
           |-4589 /bin/python3 /usr/bin/salt-master
           |-4593 /bin/python3 /usr/bin/salt-master
           |-4602 /bin/python3 /usr/bin/salt-master
           |-4606 /bin/python3 /usr/bin/salt-master
           |-4608 /bin/python3 /usr/bin/salt-master
           |-4609 /bin/python3 /usr/bin/salt-master
           |-4616 /bin/python3 /usr/bin/salt-master
           |-4697 /bin/python3 /usr/bin/salt-master
           |-4699 /bin/python3 /usr/bin/salt-master
           |-4703 /bin/python3 /usr/bin/salt-master
           |-4711 /bin/python3 /usr/bin/salt-master
           |-4712 /bin/python3 /usr/bin/salt-master
           |-4713 /bin/python3 /usr/bin/salt-master
           |-4714 /bin/python3 /usr/bin/salt-master
           |-4715 /bin/python3 /usr/bin/salt-master
           `-4717 /bin/python3 /usr/bin/salt-master

May 16 03:32:34 ssc-01a.corp.local systemd[1]: Starting The Salt Master Server...
May 16 03:32:35 ssc-01a.corp.local salt-master[4577]: [WARNING ] /usr/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.11) or chardet (4.0.0) doesn't match a supported version!
May 16 03:32:35 ssc-01a.corp.local salt-master[4577]:   RequestsDependencyWarning)
May 16 03:32:35 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.

The way I got around the error was by running:

pip3 install --upgrade requests
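After upgrading, one way to confirm the fix is to import the libraries and print the versions in use; this is a generic Python check, not something specific to SSC:

```shell
# Import requests and urllib3 and print their versions; if the versions are
# compatible, the RequestsDependencyWarning no longer appears on import.
python3 -c "import requests, urllib3; print(requests.__version__, urllib3.__version__)"
```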

Alternatively, the official documentation here talks about extracting the .whl file from the My VMware Customer Connect portal here. The file we are looking for is vRA_SaltStack_Config-8.8.0.7-1_Installer.tar.gz.

Once downloaded, we are looking for SSEAPE-8.8.0.7-py2.py3-none-any.whl, found under sse-installer/salt/sse/eapi_plugin/files.

The file needs to be uploaded to the node having the issue, and then we would run:

sudo pip3 install SSEAPE-8.8.0.7-py2.py3-none-any.whl --prefix /usr 

Finally, we can restart the SaltStack service and verify that it's running without errors:

* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-05-15 20:02:56 UTC; 51s ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 31309 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 330.0M
   CGroup: /system.slice/salt-master.service
           |-31309 /bin/python3 /usr/bin/salt-master
           |-31315 /bin/python3 /usr/bin/salt-master
           |-31320 /bin/python3 /usr/bin/salt-master
           |-31323 /bin/python3 /usr/bin/salt-master
           |-31325 /bin/python3 /usr/bin/salt-master
           |-31326 /bin/python3 /usr/bin/salt-master
           |-31327 /bin/python3 /usr/bin/salt-master
           |-31328 /bin/python3 /usr/bin/salt-master
           |-31330 /bin/python3 /usr/bin/salt-master
           |-31397 /bin/python3 /usr/bin/salt-master
           |-31398 /bin/python3 /usr/bin/salt-master
           |-31400 /bin/python3 /usr/bin/salt-master
           |-31411 /bin/python3 /usr/bin/salt-master
           |-31412 /bin/python3 /usr/bin/salt-master
           |-31413 /bin/python3 /usr/bin/salt-master
           |-31414 /bin/python3 /usr/bin/salt-master
           |-31415 /bin/python3 /usr/bin/salt-master
           `-31416 /bin/python3 /usr/bin/salt-master

May 15 20:02:54 ssc-01a.corp.local systemd[1]: Starting The Salt Master Server...
May 15 20:02:56 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.

If the status shows output similar to this

sseapi_rpc_queue: could not connect to SSE server

Follow my other guide here

SSC 8.8 sseapi_rpc_queue: could not connect to SSE server

I recently upgraded my LCM-deployed SSC server to 8.8. If you need a guide for the upgrade, you can find my other post here.

After the upgrade completed I noticed strange behavior in the SSC UI, so I checked the status of the services. Here are the errors I found and how I fixed them.

The first step was to check the status of the service:

systemctl status salt-master

The return was this

[email protected] [ ~ ]# systemctl status salt-master
* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-05-15 16:39:26 UTC; 8min ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 3035 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 357.0M
   CGroup: /system.slice/salt-master.service
           |-3035 /bin/python3 /usr/bin/salt-master
           |-3041 /bin/python3 /usr/bin/salt-master
           |-3110 /bin/python3 /usr/bin/salt-master
           |-3115 /bin/python3 /usr/bin/salt-master
           |-3119 /bin/python3 /usr/bin/salt-master
           |-3122 /bin/python3 /usr/bin/salt-master
           |-3123 /bin/python3 /usr/bin/salt-master
           |-3124 /bin/python3 /usr/bin/salt-master
           |-3125 /bin/python3 /usr/bin/salt-master
           |-3203 /bin/python3 /usr/bin/salt-master
           |-3204 /bin/python3 /usr/bin/salt-master
           |-3206 /bin/python3 /usr/bin/salt-master
           |-3214 /bin/python3 /usr/bin/salt-master
           |-3216 /bin/python3 /usr/bin/salt-master
           |-3219 /bin/python3 /usr/bin/salt-master
           |-3220 /bin/python3 /usr/bin/salt-master
           |-3221 /bin/python3 /usr/bin/salt-master
           `-4871 /bin/python3 /usr/bin/salt-master

May 15 16:39:26 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.
May 15 16:39:27 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_rpc_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:27 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_event_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:27 ssc-01a.corp.local salt-master[3035]: [ERROR   ] Failed to get the salt environments: [Errno 111] Connection refused
May 15 16:39:28 ssc-01a.corp.local salt-master[3035]: [ERROR   ] Failed to retrieve commands from SSE: [Errno 111] Connection refused
May 15 16:39:32 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_rpc_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:32 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_event_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:37 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_rpc_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:38 ssc-01a.corp.local salt-master[3035]: [ERROR   ] Failed to retrieve commands from SSE: [Errno 111] Connection refused
May 15 16:39:38 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_event_queue: could not connect to SSE server: [Errno 111] Connection refused

The way I got around the error was by editing /etc/salt/master.d/raas.conf. The file seems to be missing a key auth parameter. The engines section should look like this:

engines:
  - sseapi: {}
  - eventqueue: {}
  - rpcqueue: {}
  - jobcompletion: {}
  - keyauth: {}

After restarting the salt master I was able to verify that the error was gone. To restart the service I ran:

systemctl restart salt-master
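We can also grep the file to confirm the keyauth entry is actually in place; the path below is where the LCM-deployed appliance keeps the config:

```shell
# Print the engines block of the RAAS config; keyauth should be listed.
grep -A5 '^engines:' /etc/salt/master.d/raas.conf
```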

To verify the status I ran:

systemctl status salt-master
* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-05-15 20:02:56 UTC; 51s ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 31309 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 330.0M
   CGroup: /system.slice/salt-master.service
           |-31309 /bin/python3 /usr/bin/salt-master
           |-31315 /bin/python3 /usr/bin/salt-master
           |-31320 /bin/python3 /usr/bin/salt-master
           |-31323 /bin/python3 /usr/bin/salt-master
           |-31325 /bin/python3 /usr/bin/salt-master
           |-31326 /bin/python3 /usr/bin/salt-master
           |-31327 /bin/python3 /usr/bin/salt-master
           |-31328 /bin/python3 /usr/bin/salt-master
           |-31330 /bin/python3 /usr/bin/salt-master
           |-31397 /bin/python3 /usr/bin/salt-master
           |-31398 /bin/python3 /usr/bin/salt-master
           |-31400 /bin/python3 /usr/bin/salt-master
           |-31411 /bin/python3 /usr/bin/salt-master
           |-31412 /bin/python3 /usr/bin/salt-master
           |-31413 /bin/python3 /usr/bin/salt-master
           |-31414 /bin/python3 /usr/bin/salt-master
           |-31415 /bin/python3 /usr/bin/salt-master
           `-31416 /bin/python3 /usr/bin/salt-master

May 15 20:02:54 ssc-01a.corp.local systemd[1]: Starting The Salt Master Server...
May 15 20:02:56 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.

VMware documentation also talks about the procedure above in the Upgrade the Master Plugin documentation found here

If the status shows output similar to this

 [py.warnings      :110 ][WARNING ][5488] /usr/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.11) or chardet (4.0.0) doesn't match a supported version!
  RequestsDependencyWarning)

Follow my other guide here

Upgrading SSC (SaltStack Config) to 8.8

In this post I will go over upgrading my 8.x SSC appliance to 8.8. As a prerequisite, vRSLCM needs to be upgraded to 8.8; instructions can be found here. The upgrade already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. We can select what we need and click on Add

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes SSC

Click on Upgrade

An inventory sync is recommended if the environment has changed since LCM last performed a sync. We can trigger the sync from the UI, or click on Proceed to continue.

Select product Version 8.8.0 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

We can automatically create and delete a snapshot as part of the upgrade process.

Run the Precheck to make sure there are no errors

Once the check is complete, click on Next. Review the upgrade details and click on Next. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.8.

Upgrading vRSLCM (vRealize Lifecycle Manager) to 8.8

In this guide I will go over the steps of upgrading an existing 8.x vRSLCM appliance to the latest 8.8 release. The release notes can be found here.

The first step is to log in to vRealize Suite Lifecycle Manager and go to the Lifecycle Operations section

Go to Settings -> System Upgrade

Click on Check for Upgrade

We can see that the check found that a new version, 8.8, is available

Click on Upgrade

Verify that a snapshot or backup exists in case the process fails. Check the box for I took a snapshot of the vRealize Suite Lifecycle Manager before I performed this operation and click Next.

Click on Run Precheck

Verify that all checks have passed and click on Upgrade

This will kick off the upgrade process and start upgrading packages. The system will automatically reboot into 8.8 once completed. We can check the version by going to Settings -> System Details.

If you get the below error, clear the browser cache and try again

vRSLCM 8.x change admin@local password via API

I recently had a use case where I wanted to change the admin@local LCM password via an API call in order to automate password rotation.

If you need a guide to get started you can find my other blog here

To change the password we can use a Postman PUT call to https://$vRLCM/lcm/authzn/api/v2/users/password

Don't forget to include the new password in the body, formatted as JSON, e.g.:
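The body mirrors what the curl example below sends; something like this, where new_password is a placeholder and admin@local is the default LCM admin account:

```json
{
  "password": "new_password",
  "username": "admin@local"
}
```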

We can also leverage curl using:

curl -k --location --request PUT 'https://$vRSLCM/lcm/authzn/api/v2/users/password' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic Token' \
--data-raw '{
  "password": "new_password",
  "username": "[email protected]"
}'

Don't forget to replace the token with a properly base64-encoded token. Instructions can be found on my other blog here.

vRSLCM 8 API getting started

I've been having a hard time finding a single article that covers the vRSLCM (vRealize Suite Lifecycle Manager) API. The official documentation can be found here.

As we can see, we can leverage the Swagger UI by going to https://$vRLCM/api/swagger-ui.html, but I wanted to leverage curl from the CLI (or Postman), and as per best practices I wanted to generate a Bearer token.

The first step is to authenticate using the credentials. We can do so in Postman by filling in the Authorization fields using basic auth and running a POST against https://$vRLCM/lcm/authzn/api/login. Example:

If we want to run it via curl we need to supply the credentials in base64-encoded form. Luckily there is an easy converter at https://www.base64encode.org/. The format should be username:password. Ex:
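If you would rather not paste credentials into a third-party website, the same encoding can be done locally; admin@local/password below are the example credentials used throughout this post:

```shell
# Base64-encode "username:password" for the Basic Authorization header.
# printf is used instead of echo so no trailing newline corrupts the token.
printf '%s' 'admin@local:password' | base64
# -> YWRtaW5AbG9jYWw6cGFzc3dvcmQ=
```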

Now that we have the encoded version we can leverage a simple curl command. In my case I also added -k at the end to ignore the invalid SSL certificate.

curl --location --request POST 'https://$vRLCM/lcm/authzn/api/login' \
--header 'Authorization: Basic YWRtaW5AbG9jYWw6cGFzc3dvcmQ=' -k

If correct, the command will return a simple login success message.

Now we can use the Authorization header to query different things, like checking the health. Looking at the Swagger UI we can see that we need a GET to /lcm/health/api/v2/status

Example in Postman:

or via curl:

curl -X GET "https://$vRLCM/lcm/health/api/v2/status" -H  "accept: application/json"  --header 'Authorization: Basic YWRtaW5AbG9jYWw6cGFzc3dvcmQ=' -k
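The endpoint returns JSON, so piping the curl output through python3 -m json.tool makes it easier to read (the exact fields returned vary by vRSLCM version):

```shell
# Fetch the health status and pretty-print the JSON response.
curl -sk -X GET "https://$vRLCM/lcm/health/api/v2/status" \
  -H 'accept: application/json' \
  -H 'Authorization: Basic YWRtaW5AbG9jYWw6cGFzc3dvcmQ=' | python3 -m json.tool
```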

Installing a vROPS management pack via vRSLCM

In this post we will go over installing a vROPS management pack via vRSLCM. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Once logged on to vRSLCM, click on the Marketplace.

Alternatively, we can navigate to the Marketplace via the side menu.

In my case I want to install the SDDC Health Monitoring Solution. Searching for sddc in the search box returns a number of results; in my case the latest version is 8.6.1.

We can click directly on Download, or click on View Details to see what the management pack provides.

Clicking on Download presents us with the EULA. Once reviewed, we can click on Next to enter some user information.

We can complete the required fields marked with a red asterisk and click on Download to download the package.

Since the download was very small, it completed relatively quickly in my environment. To see the progress of the download we can navigate to Lifecycle Operations -> Requests. Once completed, we can come back to the Marketplace, where we are presented with an Install button. Click on Install.

Select which environment and datacenter we want to install the management pack in and click on Install.

We can view the progress by clicking on Check Request Status at the bottom of the page.

Once the installation reports as completed, we can go to vROPS and verify that it was successfully installed.

Navigating to the vROPS repository, we can see that the management pack was successfully installed and configured.