Oracle Linux Virtualization Manager (OLVM), Hard Partitioning and SSO error with olvm-vmcontrol

Oracle Linux Virtualization Manager is one of the few remaining options for performing hard partitioning to restrict the use of Oracle licenses on a system.

In this case, a customer already had an (outdated) OLVM environment with three servers, each with two CPU licenses for the Oracle database, for a total of six CPU licenses, which (with the 0.5 core factor for Intel/AMD processors) corresponds to twelve usable cores. Because the hardware was due for replacement, the existing OLVM was not upgraded but reinstalled.

The new environment consists of two nodes with 8 cores each, 16 cores in total, plus the OLVM engine on an additional VM (running in VMware). To avoid buying additional Oracle Database licenses, and because the company already had the necessary expertise in-house, the requirement was to limit the CPU cores/threads when moving the virtual machines (VMs) to the new OLVM environment.

This is usually relatively easy to do by following the few steps in the document https://www.oracle.com/a/ocom/docs/linux/ol-kvm-hard-partitioning.pdf.

olvm-vmcontrol must be installed on the engine host, and the CPU pinning to specific threads can then be performed with a single olvm-vmcontrol command, as sketched below.
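
A minimal sketch of both steps, run on the engine host; the engine FQDN, VM name, and CPU set are placeholders, and the exact olvm-vmcontrol options should be verified against the version of the Oracle document above:

# install the utility on the engine host
dnf install -y olvm-vmcontrol

# pin the vCPUs of the (hypothetical) VM "oradb01" to host threads 0-3
olvm-vmcontrol -m olvm-engine.example.com -u admin@internal -p <password> -v oradb01 -c setvcpu -s 0-3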

Unfortunately, the customer encountered an error when executing olvm-vmcontrol: either the user admin@internal could not connect because of supposedly incorrect credentials, or admin@ovirt, the user actually used on the engine interface, did not exist at all:

'Error during SSO authentication “access_denied”: “Cannot authenticate user No valid profile found in credentials..”'

However, the problem was not an incorrect password, but the fact that KeyCloak had been accepted as the suggested default during installation ("never change default values if you are unsure" is not always good advice). KeyCloak is still considered experimental, and it is what prevents olvm-vmcontrol from connecting to the engine.

You can easily check whether KeyCloak is installed on the host running the engine:
grep KEYCLOAK_BUNDLED /etc/ovirt-engine/engine.conf.d/12-setup-keycloak.conf
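
On an engine where KeyCloak was selected during setup, the grep typically returns a line similar to the following (the exact casing and quoting may differ between versions):

KEYCLOAK_BUNDLED="true"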

If the value is "True", olvm-vmcontrol will not be able to log in to the OLVM engine. Fortunately, there is a solution: KeyCloak must be disabled. Since this involves several steps, I refer you to the relevant note KB495706 in Oracle's support portal (simply enter the note number in the search field of the new support.oracle.com portal).
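
The note is the authoritative source; very roughly, the procedure boils down to reconfiguring the engine with KeyCloak disabled. As an unverified sketch (the otopi key name is an assumption on my part, so follow the note rather than this line):

# re-run engine-setup with KeyCloak disabled (key name is an assumption; follow KB495706)
engine-setup --otopi-environment="OVESETUP_CONFIG/keycloakEnable=bool:False"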

Once KeyCloak has been disabled, olvm-vmcontrol can log in with the user admin@internal, and the CPU/thread pinning for the VM can be configured as desired; a verification example follows below. For future installations, the customer now knows not to answer "Yes" to the KeyCloak question when configuring the engine.
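
To confirm the result, the current pinning can be read back with the same utility (placeholder names again; options as in the Oracle document):

olvm-vmcontrol -m olvm-engine.example.com -u admin@internal -p <password> -v oradb01 -c getvcpu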

To spread the load as evenly as possible across the CPU NUMA nodes, the pinned threads should be distributed evenly across the available NUMA nodes (two in this case). VMs that do not run Oracle-licensed software can then be placed on the remaining threads. The host's NUMA topology can be inspected as shown below.
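
Which host threads belong to which NUMA node can be checked on the KVM host with standard tools (numactl may need to be installed separately):

# list the NUMA nodes and the CPU threads assigned to them
lscpu | grep -i numa
numactl --hardware

For example, a VM pinned to eight threads could be given four threads from node 0 and four from node 1.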

However, CPU pinning has one disadvantage: live migration of a VM from one node to the other is no longer supported. The VM must be stopped, the host assignment in its definition edited, and the VM restarted. This is a small organizational and high-availability price to pay in return for the license savings.