
Workaround instructions to address CVE-2021-44228 and CVE-2021-45046 in vRealize Operations 7.x

In this article I will go over one of the workaround instructions to address CVE-2021-44228 and CVE-2021-45046 in vRealize Operations 7.x. I have tested the workaround on vROPS 7.5, as it's still shipped with VCF 3.x and I haven't yet seen documentation on a workaround for this version. If you are looking for instructions for version 8.x, consult KB article 87076. This was tested on December 21, 2021. Please check the official documentation or open a ticket for production usage.

Create a snapshot of the vROPS components to make sure we have something to revert to in case anything goes wrong.

Log in to the vROPS admin UI, typically https://ip_address/admin, and take the cluster offline. This applies to all nodes, including but not limited to Analytics, Primary, Replica, Data, Remote Collector, and Witness nodes.

Give a reason and press OK.

Verify the cluster is offline before continuing

Log in via SSH and place the files in a temporary path, e.g. /tmp. Because vROPS 7.5 doesn't come with the newer OpenSSL modules, we need to find other means of getting the files onto the server, without using a direct download method like wget.
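For example, the scripts can be pushed over SCP from another machine that already has them saved locally. A minimal sketch, using the same ip_address placeholder as above:

scp vrops-log4j-fix.sh cp-log4j-fix.sh data-rc-witness-log4j-fix.sh root@ip_address:/tmp/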

In my case I put the code below in a file called vrops-log4j-fix.sh in my /tmp directory:

#!/bin/bash

file=/tmp/impacted_jars.txt

echo "Searching for impacted .jar files. Please wait..."

find /usr/lib -type f -name "*.jar" -exec sh -c "zipinfo -1 '{}' | grep 'org/apache/logging/log4j/core/lookup/JndiLookup.class' && echo {}" \; | grep "/usr/lib" > $file

line_qty=$(wc -l < $file)

if [ $line_qty -ne 0 ]; then
    echo "Found $line_qty impacted .jar files"
else
    echo "No impacted .jar files found"
    exit 0
fi

echo "Starting to patch impacted .jar files"

while IFS= read -r line;
do     
    echo "patching -> $line"

    own_user=$(stat -c '%U' "$line")
    own_group=$(stat -c '%G' "$line")

    zip -q -d "$line" org/apache/logging/log4j/core/lookup/JndiLookup.class
    if [ $? -ne 0 ]; then echo "ERROR: Failed to patch $line"; fi

    chown "$own_user":"$own_group" "$line"

done < $file

rm -f $file

Make the file executable by running chmod +x vrops-log4j-fix.sh

Then run the script: ./vrops-log4j-fix.sh

The system will go through, find the impacted .jar files, and try to patch them. If successful, we should end up with something like this:
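Going by the script's own echo statements (the jar paths below are illustrative, not real file names):

Searching for impacted .jar files. Please wait...
Found 2 impacted .jar files
Starting to patch impacted .jar files
patching -> /usr/lib/vmware-vcops/user/plugins/example-adapter.jar
patching -> /usr/lib/vmware-vcops/common/lib/example-lib.jar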

Next, we will do the same with the cp-log4j-fix.sh file:

#!/bin/bash

#set -x


FAILURE="0"
WRAPPER_FILES="/usr/lib/vmware-vcops/user/conf/collector/wrapper.conf"
for f in $WRAPPER_FILES
    do
        last_idx=""
        if [[ -f $f ]]; then
            echo "********************************"
            echo "Updating file: $f"
            let last_idx=$(egrep "^wrapper.java.additional.[[:digit:]]+=" $f | cut -d= -f1 | awk -F '.' '{print $4}' | sort -n | tail -1)
            if [[ -z $last_idx ]]; then
                echo -e "ERROR: Failed to get JVM additional index"
                let FAILURE="1"
                continue
            fi
            ((last_idx++))
            echo -e "\n#Fixing Apache Log4j2 Remote Code Execution Vulnerability\nwrapper.java.additional.$last_idx=-Dlog4j2.formatMsgNoLookups=true" >> $f
            if [[ $? != 0 ]]; then
                echo -e "ERROR: Failed to update file: $f\n"
                let FAILURE="1"
            else 
                echo -e "Sucessfully updated file: $f\n"
            fi
        else
            echo -e "ERROR: file is not found: $f\n"
            let FAILURE="1"
        fi
    done


CASA_JVM="/usr/lib/vmware-casa/casa-webapp/bin/setenv.sh"
echo "********************************"
echo "Updating file: $CASA_JVM"
echo 'JAVA_OPTS="$JAVA_OPTS -Dlog4j2.formatMsgNoLookups=true"' >> $CASA_JVM
if [[ $? != 0 ]]; then
    echo -e "ERROR: Failed to update file: $CASA_JVM\n"
    let FAILURE="1"
else
    echo -e "Sucessfully updated file: $CASA_JVM\n"
fi

if [[ "X$FAILURE" == "X1" ]]; then
    exit 1
fi

exit 0
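As a quick spot check (my own addition, not part of the official workaround), we can confirm the flag was actually appended to the files this script touches:

grep formatMsgNoLookups /usr/lib/vmware-vcops/user/conf/collector/wrapper.conf
grep formatMsgNoLookups /usr/lib/vmware-casa/casa-webapp/bin/setenv.sh

Each command should print the -Dlog4j2.formatMsgNoLookups=true entry added above.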

And lastly, the data-rc-witness-log4j-fix.sh file:

#!/bin/bash

#set -x


FAILURE="0"
WRAPPER_FILES="/usr/lib/vmware-vcops/user/conf/analytics/wrapper.conf
/usr/lib/vmware-vcops/user/conf/collector/wrapper.conf
/usr/lib/vmware-vcops/user/conf/gemfire/wrapper.conf
/usr/lib/vmware-vcops/user/conf/tomcat-enterprise/wrapper.conf"
for f in $WRAPPER_FILES
    do
        last_idx=""
        if [[ -f $f ]]; then
            echo "********************************"
            echo "Updating file: $f"
            let last_idx=$(egrep "^wrapper.java.additional.[[:digit:]]+=" $f | cut -d= -f1 | awk -F '.' '{print $4}' | sort -n | tail -1)
            if [[ -z $last_idx ]]; then
                echo -e "ERROR: Failed to get JVM additional index"
                let FAILURE="1"
                continue
            fi
            ((last_idx++))
            echo -e "\n#Fixing Apache Log4j2 Remote Code Execution Vulnerability\nwrapper.java.additional.$last_idx=-Dlog4j2.formatMsgNoLookups=true" >> $f
            if [[ $? != 0 ]]; then
                echo -e "ERROR: Failed to update file: $f\n"
                let FAILURE="1"
            else 
                echo -e "Sucessfully updated file: $f\n"
            fi
        else
            echo -e "ERROR: file is not found: $f\n"
            let FAILURE="1"
        fi
    done


CATALINA_FILES="/usr/lib/vmware-casa/casa-webapp/bin/setenv.sh
/usr/lib/vmware-vcops/tomcat-web-app/bin/setenv.sh"

for f in $CATALINA_FILES
    do
        if [[ -f $f ]]; then
            echo "********************************"
            echo "Updating file: $f"
            echo 'JAVA_OPTS="$JAVA_OPTS -Dlog4j2.formatMsgNoLookups=true"' >> $f
            if [[ $? != 0 ]]; then
                echo -e "ERROR: Failed to update file: $f\n"
                let FAILURE="1"
            else 
                echo -e "Sucessfully updated file: $f\n"
            fi
        else
            echo -e "ERROR: file is not found: $f\n"
            let FAILURE="1"
        fi
    done

if [[ "X$FAILURE" == "X1" ]]; then
    exit 1
fi

exit 0

It should look similar to this in the end:

To verify that the fix for CVE-2021-44228 was applied, run the following:

ps axf | grep --color log4j2.formatMsgNoLookups | grep -v grep

Running ./vrops-log4j-fix.sh again will also verify that there are no .jar files left that need to be patched.
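As an extra check on any single file from the earlier list (the jar path below is a placeholder), zipinfo should no longer list the vulnerable class:

zipinfo -1 /usr/lib/example.jar | grep JndiLookup || echo "JndiLookup.class removed"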

Next, bring the cluster back online in the admin UI.

Changing vRO Kubernetes IP range

I recently ran into a routing issue where the Kubernetes IP range in vRO 8.6 was already in use somewhere else on the network. I didn't want to redeploy the appliance, so I went through the steps below to get it updated.

First, I identified the IP range in use by running:

vracli network k8s-subnets

Let's check the status of the pods to make sure everything is running as it should. If any of the pods experience issues, the change won't go through and will cause additional problems:

kubectl get pods -n prelude

The expectation is that under the READY column every pod shows all of its containers ready, similar to this:

NAME                               READY   STATUS    RESTARTS   AGE
docker-registry-695f9b8b45-d8gqr   1/1     Running   0          53m
postgres-0                         1/1     Running   0          53m
proxy-service-5d8f64b54-lmxg5      1/1     Running   0          54m
vco-app-78499d8cbd-4mcnk           3/3     Running   0          54m

To set a new internal Kubernetes IP range, I ran:

vracli network k8s-subnets --cluster-cidr 192.168.0.0/22 --service-cidr 192.168.4.0/22

Then, in order to apply the changes, I ran:

vracli upgrade exec

I was prompted with a question:

The services will be shut down while upgrade is in progress. Confirm you want to continue with the upgrade operation.[Y/n]

After pressing Y, the system went ahead and reconfigured/redeployed the pods on the proper network.

And lastly, I checked the status of the pods to make sure they all came back:

kubectl get pods -n prelude
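If the pods take a few minutes to settle, the standard kubectl watch flag can be used to follow them until everything is back to Running (plain kubectl behavior, nothing vRO-specific):

kubectl get pods -n prelude -w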

vIDM 3.3.5 HA

In this guide we will go over the vIDM 3.3.5 HA configuration. The official documentation can be found here.

I'm going to assume that the load balancer configuration is already completed and that the vIDM appliance has the required certificate in the LCM inventory. Please read the official documentation for the full requirements.

We will be using the scale-out feature in Lifecycle Manager. To do so, navigate to Lifecycle Operations -> Environments -> globalenvironment -> View Details and click on Add Components.

It is recommended to perform an inventory sync prior to starting the process. It can be triggered by pressing the Trigger Inventory Sync button. In my case I don't need one, as I did it earlier, so I'll just click Proceed.

The network configuration should be populated. Verify the config and click Next.

Towards the bottom of the Components page there will be a Components section. Click on the plus sign next to it and select VMware Identity Manager Secondary Node. Perform this task twice so we end up with 3 vIDM nodes.

Complete the required fields, like the network configuration and the Cluster Virtual IP.

On the next page, run the precheck in order to execute the data validation.

Verify the manual validation steps as described in the pop-up window and click on Run Precheck.

Once all the checks are complete, click Next, verify the summary, and click Submit.

This will take us to the Request Details page, where we can follow the steps taken.

Once the additional nodes are installed validate that everything is working as expected.

Upgrading vRSLCM (vRealize Lifecycle Manager) to 8.6.1

In this guide I will go over the steps of getting an existing 8.x vRSLCM appliance upgraded to the latest 8.6.1 release. The release notes can be found here.

The first step is to log in to vRealize Suite Lifecycle Manager under the Lifecycle Operations section.

Go to Settings -> System Upgrade.

Click on Check for Upgrade

We can see that the check found a new version available, 8.6.1.5.

Click on Upgrade

Verify that a snapshot or backup exists in case the process fails. Check the box for "I took a snapshot of the vRealize Suite Lifecycle Manager before I performed this operation" and click Next.

Click on Run Precheck

Verify that all checks have passed and click on Upgrade.

This will fire up the upgrade process and start upgrading packages. The system will automatically reboot into 8.6.1.5 once completed. We can check the version by going to Settings -> System Details.

If you get the below error, clear the browser cache and try again.

vRSLCM (vRealize Lifecycle Manager) Product Support Pack

In this guide I will go over the steps of getting an existing 8.x vRSLCM appliance to support the latest product releases available. Here is a great blog that goes into the details of what the Product Support Pack is: https://blogs.vmware.com/management/2019/01/vrslcm-pspak.html. Typically, the newer Product Support Pack is included as part of the upgrade for LCM; however, sometimes there are product releases in between LCM releases, which is where Product Support Packs come in handy.

The first step is to log in to vRealize Suite Lifecycle Manager under the Lifecycle Operations section

Go to Settings -> Product Support Pack.

We can see that I recently upgraded to 8.6; however, a new update, 8.6.0.1, is available. Based on the details, the new support pack adds support for vRA 8.6.1. If an update is not shown, click on the Check Support Packs Online button and refresh the screen after a few minutes.

Click on Apply Version

Verify that a snapshot or a backup exists and click Submit

We can view the progress by clicking on the Click Here link after submitting the request

Once the process is complete, the system will most likely reboot. To check the status we can go back to Settings -> Product Support Pack. As we can see, we are now at the updated patch level.

If you get the below error, clear the browser cache and try again.

Deploying vRA 8.6.1 using the Easy Installer

In this guide I will go over the deployment of vRA 8.6.1 using the Easy Installer. The reason I like the Easy Installer is that it deploys everything I need: vIDM, Lifecycle Manager, as well as vRA.

To get started we need to download the installer from here.

Once downloaded, mount the ISO as a drive using your favorite tool, or extract it as an archive using something like 7-Zip.
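On a Linux machine, for example, the ISO can be loop-mounted from a shell (the file name and mount point below are placeholders):

sudo mkdir -p /mnt/vra-installer
sudo mount -o loop vra-easy-installer.iso /mnt/vra-installer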

Once mounted, go to the CD-ROM and open the mac directory if you are on macOS, or win32 if you are on Windows. From there, run the installer file.

We will be presented with the following window:

The migration option allows us to migrate from vRSLCM 2.1 to 8.x. It can migrate datacenters and vCenters, all existing environments, DNS, SNMP, NTP, My VMware, and proxy details, as well as the vIDM installation and Blackstone Content endpoints. For the purpose of this guide we will proceed with the Install option.

Once we click Install, we will be presented with an introduction listing which components will be included as part of the installation. In my case I can see vRealize Automation 8.6.1, Lifecycle Manager 8.6, and Identity Manager 3.3.5.

Click Next, review and accept the agreement, and click Next.

Provide the vCenter details and click Next. In my case I used the administrator account. A detailed list of permissions needed for deployment can be found here.

Sections 4, 5, 6 and 7 are straightforward, so I will skip them.

In section 8, select a password that will be used across all products for the local usernames, e.g. the LCM root and local admin, the vIDM admin, sshuser, root, etc.

Select a VM name for the Lifecycle Manager appliance and provide its IP details and credentials. If you are expecting a large repository in Lifecycle Manager, additional disk can be added under the Increase Disk Size in GB section.

The FIPS Mode compliance option enforces FIPS compliance; however, keep in mind that with FIPS mode enabled there are limitations on what the product can do. This can be disabled later and re-enabled as needed.

Complete the fields and click Next.

The next portion is vIDM. If there is an existing vIDM appliance in the environment, we can also import it. In my case I will deploy a new one.

One important option under vIDM is Sync Group Members to the Directory When Adding Group. When enabled, members of the groups are synced when groups are added from Active Directory. When this is disabled, group names are synced to the directory, but members of the group are not synced until the group is entitled to an application or the group name is added to an access policy. Note: post-deployment, this value cannot be changed from vRealize Suite Lifecycle Manager. To update this field after deployment, navigate to VMware Identity Manager.

My configuration page looks like this:

The next section is the vRA configuration. In here we have a couple of options: we can perform a standard 1-node deployment or a cluster deployment, which includes 3 appliances. The FIPS Compliance mode enables FIPS compliance. Unlike LCM, this mode cannot be disabled after the deployment, and it disables a number of options in vRA from an LCM perspective. Please make sure it's only enabled if required.

The advanced configuration at the bottom of the page includes the option to change the internal Kubernetes cluster and service IP ranges. This is useful if the default range is already in use on the internal network. We want to make sure we pick an IP range that's not used somewhere else, in order to avoid routing issues. Once complete, click Next.
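If in doubt, the ranges can also be inspected (and changed) after deployment from the appliance console, using the same vracli command covered in the vRO post earlier on this page:

vracli network k8s-subnets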

The next page gives us a summary of our deployment, and we can click Submit to start the deployment.

Next we are presented with the installation process. We can follow along; in my environment the full deployment took about 1.5 hours.

After the install is complete, we are presented with links for the different services.

Next, I would recommend the LCM certificate management guide found here.