Registering a standby database fails (within Oracle GI/repository) on Oracle Database Appliance

A couple of days ago I tried to register a standby database on an ODA running a newer release (in my case 18.8) of the ODA software and Grid Infrastructure. I followed the procedure from the Dbvisit Standby documentation.
There, it is described that you need to start the database on the standby environment "read only".
Afterwards, you should be able to register the database in the Grid Infrastructure using odacli.
The procedure is described in detail in the manual and should work with Dbvisit Standby versions 8 and 9.

Unfortunately, if you run odacli to register the database, it fails.

odacli register-database -c OLTP -s odb6 -t SI -sn <> -p <somepassword>

The job at the ODA is started, but it errors out:

Job details
ID: 432c4c54-0d2f-XXXX-XXXX-aa15f27ea3ab
Description: Database service registration with db service name: <>
Status: Failure
Created: May 5, 2020 3:40:06 PM CEST
Message: DCS-10001:Internal error encountered: Failed to archive log all
Session altered.

alter system archive log current
ERROR at line 1:
ORA-01109: database not open

This also means that you see the "failed" configuration in odacli list-databases.

I have created a service request with Dbvisit to check the manual and the process of registering the database on ODA. In the meantime, the only solution is to open the database "read/write" to register it on the ODA.
To open the standby read/write, it is necessary to switch over the primary to the standby you want to register and run odacli register-database on the (now) primary. Afterwards, you can switch the roles of the databases again, so your original standby is a standby again at the end.
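A rough sketch of this workaround (the Dbvisit configuration name OLTP is taken from the odacli call above; the dbvctl syntax is an assumption and may differ in your Dbvisit version, so check your own documentation):

```shell
# On the current primary: switch roles, so the former standby becomes primary
./dbvctl -d OLTP -o switchover

# On the (now) primary node: register the database in the ODA repository
odacli register-database -c OLTP -s odb6 -t SI -sn <service_name> -p <somepassword>

# Switch the roles back, so the original standby is a standby again
./dbvctl -d OLTP -o switchover
```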

For your application this means that a small maintenance window is necessary, during which the end users can't work.

I will update this blog post as soon as I have more information, so check back regularly if you are interested in any news.

Autoupgrade.jar - Skip Fixups Step-by-Step

Autoupgrade is becoming more and more important for upgrading Oracle databases to newer releases (18c, 19c, 20c). Mike Dietrich, for example, said at the DOAG conference 2019 that autoupgrade will also be the future of Oracle Exadata database upgrades.

The autoupgrade tool is evolving very quickly and becomes more stable and more helpful with every new release (nearly every quarter, sometimes every 3 or 4 weeks, you can download a new version from My Oracle Support).

But there are still things you should know, which can be misunderstood in the documentation when you first start with the autoupgrade tool. Otherwise you may not get the result(s) you are expecting.

One of the things I write about today is how you can skip automatically running fixups. This is useful, e.g., if you want to prepare the database with the pre-upgrade part and run autoupgrade.jar later at a specific time (e.g. at the weekend or late in the evening) and not in dialog mode. This means you spend less time in the "upgrade" phase itself (e.g. gathering statistics is part of the normal prefixups).
It is also helpful if you run autoupgrade.jar in "deploy" or "upgrade" mode, because BOTH modes automatically run POST fixups (and you may want to skip one or more and do them manually later).

As every time, the first step is to download the latest autoupgrade.jar from My Oracle Support!

Then, you check the Java environment and create your own configuration file. (A small how-to can be found in my slide deck from DOAG 2019, "Alles wird Auto", or on Mike's blog.)

I typically use "create_sample_file config" and change that to fit my needs.
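Creating the sample file is a one-liner (a sketch; run it from the directory where you unzipped autoupgrade.jar):

```shell
java -jar autoupgrade.jar -create_sample_file config
```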

After everything is specified in the config file, the autoupgrade.jar is started in analyze mode.
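A sketch of the analyze run (the configuration file name myconfig.cfg is just an example):

```shell
java -jar autoupgrade.jar -config myconfig.cfg -mode analyze
```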

After some minutes the job has finished. Now it is essential to note down the job number, because you will need it later on in your configuration file.


At the specified log path, you have a subfolder <job-number>/prechecks with a couple of files.

First, check ALL the issues in the preupgrade HTML report carefully, because you want to find the ones you don't want autoupgrade.jar to fix "automatically" (in the pre or post phase) when running later on.

Then, you need to edit the fixup file in the same directory. It's the <database>_checklist.cfg file. 

E.g. if you want to skip the "DICTIONARY_STATS" fixup, you have to change its [runfix] entry from YES (the default, as you can see in the picture) to NO and save your file.
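The relevant entry in the checklist file looks roughly like this (a sketch from memory; the exact fields and layout can differ between autoupgrade versions):

```
[checkname]        DICTIONARY_STATS
[stage]            PRECHECKS
[fixup_available]  YES
[runfix]           NO
[severity]         WARNING
```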

Now have a look at the documentation for overriding the default fixups. The documentation clearly states: "By default, AutoUpgrade uses the most recent generated file." This means that if you run the autoupgrade tool later on in "deploy" mode, the new job creates a new checklist file and does not pick up yours automatically!

If you use the autoupgrade mode "upgrade", the prefixup phase is skipped completely, so the fixups you do want to have done will not run either.

You have to create a new entry in your autoupgrade configuration.
The parameter is called "checklist", and there you can specify the checklist configuration with the modified prefixup steps.
I like to copy/paste that from another default configuration file I have, so I only need to uncomment the last line and put the right directory/filename into the configuration.

So the configuration for checklist in my file after the change is:
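For illustration, such a checklist entry could look like this (the upg1 prefix and the log path are assumptions based on my environment; use your own log directory and the job number you noted down above):

```
upg1.checklist=C:\oracle\upg_logs\ECKES\<job-number>\prechecks\eckes_checklist.cfg
```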


I also typically fix some other things manually, so I run the analyze phase more than once and skip all the manually fixed things in the eckes_checklist.cfg of my FIRST job. This makes it easier to handle, and I don't need to remember which fixup-skip configuration file I have specified in my autoupgrade configuration.

The last step is to run autoupgrade again in "deploy" mode. It will take the checklist configuration file specified in your config instead of a new one, run the fixups in there, skip the ones set to "NO", proceed with the upgrade, run the post fixups which are not set to "NO" and finish the upgrade.
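A sketch of the deploy run (again assuming myconfig.cfg as the configuration file name):

```shell
java -jar autoupgrade.jar -config myconfig.cfg -mode deploy
```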

Isn't that easy in comparison with all the manual things you had to do before when upgrading your database?

ORA-07217 / RMAN-03009 on MS Windows with a UNC path that contains a dollar sign

A customer of mine saw some strange behaviour after upgrading a database to 19c on MS Windows. He ran into an error while trying to run "delete obsolete" in RMAN.

The error he got was:
RMAN-03009: failure of delete command on ORA_DISK_1 channel
ORA-07217: sltln: environment variable cannot be evaluated

The reason behind that?
Well, he used a special kind of backup location with a UNC path.

As you can see, the UNC path points to another server, using the administrative "E$" share for the E:\ drive on that server. With 12.1 the usage of the dollar sign wasn't a problem; with 19c, after the upgrade, the database started to throw the "environment variable" error.
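For illustration, such a backup location might have been configured like this (the server name and path are made up):

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '\\backupsrv\E$\rman_backup\%U';
```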

There was no chance to get rid of these backups/backup entries in the controlfile and the RMAN catalog using any combination of crosscheck, delete expired (even with force) and delete obsolete.

The only chance we had was to unregister the database from the RMAN catalog, delete all backup-related information from the controlfile repository and register the database with the RMAN catalog again. After this, we moved all "old" backup files to a new folder and cataloged them again.

And here is how to get rid of the "problematic" entries from the controlfile:

First you have to connect to the database as SYSDBA. Then you can run the following select:

select rownum-1, type from v$controlfile_record_section;

You need to find the section with the type of the backup things you want to delete. Be aware that you can corrupt your controlfile and your database if you remove the wrong things!

To clear the section in the controlfile you must run the following for every section you want to delete:

execute sys.dbms_backup_restore.ResetCfileSection(<your_section_id>);

You should be able to delete all backup-related information with that. If you run into an error while resetting other controlfile sections, then that section is protected.

MS Windows, Virtual Service Accounts and Non-CDB to CDB/PDB conversion

Sometimes, when you test something, you stumble over something else. This happened to me 🙋 while testing autoupgrade.jar (I will also write a blog update about the latest version in the next days) with a pre-release lab version (thanks to Mike Dietrich for sharing it).
Part of this pre-release version is the conversion of a non-CDB database into a PDB, plugging it automatically into an already existing CDB.
(Explanation for the screenshots: ECKES is a 12.2 non-CDB database, ECKES1 is an empty 19c CDB.)

πŸ‘‰ By the way - 19c is the last version where NON-CDB architecture is supported AND you do have 3 (THREE!) pdbs free per CDB with 19c and later. Even with Standard Edition 2 licenses! That's great, so use it!

But back to my tests, what happened?
The old Oracle database (ECKES) created an XML metadata file after it was upgraded to 19c. You need this file to plug the database into an already existing CDB later on, but the CDB (ECKES1) wasn't able to read this XML metadata file. This can happen in every environment, not only with the autoupgrade tool!

I got the following error on every run, even when I did the non-CDB to PDB migration by myself:
ORA-65026: XML metadata file error : LPX-00202: could not open
While testing manually, I later found out that the CDB database was able to read the XML only when it was placed in a subdirectory of this Oracle database version, e.g. %ORACLE_HOME%\database, but not if it was automatically written by the autoupgrade tool to its own directories or, e.g., to my c:\temp.

You can see it in the following screenshot, where DBMS_PDB.CHECK_PLUG_COMPATIBILITY succeeded at the first call (c:\oracle\product\\database\Eckes.xml), but not at the second, the home of the autoupgrade tool (c:\oracle\product\upg_logs\ECKES\...).
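For reference, such a compatibility check can be run like this (the XML path and PDB name are just examples from my test setup):

```sql
SET SERVEROUTPUT ON
DECLARE
  is_compatible BOOLEAN;
BEGIN
  is_compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
    pdb_descr_file => 'c:\temp\Eckes.xml',
    pdb_name       => 'ECKES');
  -- YES means the non-CDB can be plugged into this CDB
  DBMS_OUTPUT.PUT_LINE(CASE WHEN is_compatible THEN 'YES' ELSE 'NO' END);
END;
/
```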

It took me some weekend hours to find out why this happens. 
With Oracle 19c, I installed the database software using a "virtual account"; I think most people will do it that way to avoid creating special users in Windows. After that, I used DBCA to create an empty CDB for my tests.

The "old" database software is running as "Local System", and so was the autoupgrade service from the new home, while the freshly created 19c CDB was running with the "virtual account", as "NT Service\OracleService<MySID>".

When an XML metadata file is created, it is created as "Local System", not as "NT Service\OracleService<MySID>", and therefore the freshly created file lacks the read/write permissions of the service accounts group (ORA_OraDB19Home1_SVCACCTS).
You can see here that the XML metadata file created in the %ORACLE_HOME% subdirectory does have the rights,

while the XML metadata file created in the autoupgrade.jar home does not have the rights of the service account.

My first thought was: why is it not enough to have "ORA_DBA" as the group for both files? But that's something that is built into Windows, unfortunately.

How can you fix that?
If you are doing a non-CDB to PDB conversion manually, it's easy to choose the right directory, e.g. place the file in the database subdirectory of your new %ORACLE_HOME%. If you want to do it in a more automated way, or you want to store the XML metadata file somewhere else, you have to change the service of the CDB into which you want the database to be plugged. You need to set it to "Local System" for the time of the migration.

After the migration is done, you can set the service back to the virtual user service account. 

⚡ A comment on that: be aware that you need to restart the service (which restarts your database instance) after you have changed the "Log on as" setting in the MS Windows service properties!
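Instead of clicking through the service properties, this can also be scripted with the built-in sc.exe (a sketch; the service name OracleServiceECKES1 is just the one from my test setup):

```
:: run the CDB service as Local System for the time of the migration
sc config OracleServiceECKES1 obj= LocalSystem
sc stop OracleServiceECKES1
sc start OracleServiceECKES1

:: after the migration, switch back to the virtual service account
sc config OracleServiceECKES1 obj= "NT Service\OracleServiceECKES1"
```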

Another way to fix issues with Oracle databases on MS Windows is to switch to Linux environments, e.g. Oracle Database Appliances. You can avoid a lot of the things you struggle with on Windows. 😉

I hope this helps someone to avoid hours of investigation.