
How to start preparing your database for the upgrade to Oracle database 23ai - even without an on-premises released database software!

As many of you already know, Oracle's autoupgrade tool will be the only supported option for database upgrades in the future. This is also stated in the support notes for the 23ai database.

However, the autoupgrade tool - and most people don't know this - already supports the upgrade to Oracle database version 23ai in release 24 (available since around March of this year). The question therefore arises: how can you use the autoupgrade tool TODAY to optimally prepare your existing databases for the upgrade to 23ai? Especially now, when the perhaps quieter time of the year is approaching, while the official release date of 23ai has been postponed (once again) to "sometime in calendar year 2025"?

The preparation

First of all, the latest version of autoupgrade.jar must be downloaded from Oracle Support (Document ID 2485457.1).
This is currently a build from October 2024 (24.7.241021). The autoupgrade.jar can then be copied to the server and tested for functionality. If no Java is installed on the server, the JDK from the highest-release Oracle Home can be used. This has always worked so far.
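The functional test can be sketched like this; the paths are examples and will differ in your environment:

```shell
# Use the JDK shipped with the (highest) installed Oracle Home
# if no separate Java installation exists on the server.
export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk   # example path
$JAVA_HOME/bin/java -jar /home/oracle/autoupgrade.jar -version
```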
Oracle's support for direct upgrades to 23ai is limited to 19c and 21c databases, which means older database versions cannot be prepared for a direct upgrade to 23ai using the autoupgrade tool. An intermediate upgrade (to 19c) is therefore necessary beforehand, or data migration with Data Pump export/import is a possible way to move from older database versions to 23ai. However, no pre-test can be done with Data Pump.

If the execution of autoupgrade.jar (java -jar autoupgrade.jar -version) works on the server, the next step is to create a config file for the database that is to be prepared for the 23ai upgrade.

The following information is important here:

  • the log directories (global and per DB)
  • Which SID should be tested/analyzed
  • The source Oracle Home - this needs to be a 19c or 21c database home
  • The target Oracle Home - this is set even though the directory does not yet exist, as the 23ai database software is not yet available on-premises
  • Start_time and upgrade_node can be set by default (NOW/localhost)
  • Important - the autoupgrade tool normally takes the target_version from the target home. However, as this home does not exist, it is important to set target_version to the fixed value 23 here.
Config file
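A minimal config file for this scenario could look roughly like the following; all names and paths are examples, and the decisive line is the fixed target_version:

```
global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.log_dir=/u01/app/oracle/autoupgrade/DB19
upg1.sid=DB19
upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
upg1.start_time=NOW
upg1.upgrade_node=localhost
# the target home does not exist yet, so the version must be set fixed:
upg1.target_version=23
```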



 Analyzing the existing database

Once this preparation has been completed, the autoupgrade tool can be run in analyze mode against the source database. NO changes are made to the source database, which means this procedure is completely safe.
However, it is important that all pluggable databases (PDB) are open. PDBs that are in mount status are not analyzed.
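The analyze run itself is a single call; the config file name is an example:

```shell
java -jar autoupgrade.jar -config DB19.cfg -mode analyze
```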

Warning - PDB is mounted, not open

If all PDBs are open, there is no longer a warning and the corresponding autoupgrade job starts analyzing the existing database.

Job starts analyzing the database without warning

The "status" command on the autoupgrade CLI can be used to display the progress of the analysis for all jobs. It shows 4 databases (1 CDB + seed PDB + 2 user PDBs).

Status of all autoupgrade jobs running

To see the details of the individual databases, just add the job number to the "status" command. Now one can see the stage progress per container (the 4 databases).
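Inside the autoupgrade console, the two status variants look roughly like this (the job number 100 is an example):

```
upg> status
upg> status -job 100
```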


The job should be successfully completed after a few minutes. There are typically no errors in the analysis phase.

Job completed

The analysis result is now available for the analyzed database in the prechecks subdirectory of the job's log directory, in various formats including JSON, text and HTML. The next picture shows the HTML report.

Overview HTML

The first overview already shows that there are components in the 19c database that would be removed during an upgrade to 23ai. These are remnants of Oracle Multimedia - marked in yellow.

If one now looks into the detailed sections for the databases (CDB and PDBs), one will find valuable information on how to deal with such changed or removed components. The autoupgrade tool recommends taking a look at various support notes, for example.

Notes to look at support.oracle.com documents

In the database analyzed here (which had already been upgraded from a 12c environment), the autoupgrade tool also found a configuration of traditional auditing. Oracle first supplemented traditional auditing with Unified Auditing some time ago and has now officially desupported it, so a switch to Unified Auditing is required.
As reworking the auditing configuration can be a very time-consuming process, it is advisable to adapt the configuration on the existing database even before the upgrade planning for 23ai officially starts.

Finding Traditional Auditing

The autoupgrade tool may also find things as part of the POST check - things which then need to be done on the 23ai database, such as changing the LOB type of an audit table. Of course, this cannot be prepared in the existing environment yet, but it does provide an indication of any manual work (and possibly additional downtime) that may be needed during a later upgrade.

Post Check - LOB Change on Audit Table

Incidentally, even though the screenshots are all from a Linux environment, the analysis method was also tested with a 19.8 database under MS Windows. It should therefore be possible to use this procedure on all supported operating systems.

Now start preparing your database for a successful upgrade to database version 23ai.



DOAG Conference 2020 - Presentation is online

I have uploaded the PDF of my presentation "Datenbankupgrades mit Autoupgrade über 2 Nodes" to my Slide Download Center (you can find it on the right side or via this link). The presentation is in German, but if you have any questions, write a comment or contact me.


Autoupgrade.jar throws error at analyze phase - UPG-1316

While testing the newest versions of autoupgrade.jar, I ran into an error (in my MS Windows environment; I haven't tested it on Linux etc. yet).

The error itself is shown as UPG-1316, unexpected exception error:



I have seen two different causes. One occurs when checking disk space without using a Fast Recovery Area. Then you get this "Min_Archive_Dest_Size" error:

 
The same problem arises when autoupgrade checks the FRA size - "Disk_Space_For_Recovery_Area".

Affected versions are 19.9.2 and 19.10.0; the issue does not happen with 19.7.3, 19.7.5 and 19.8.1 on the same database/server.

After investigating the issue with development, it was fixed within 4 days in the newest development version, which I have tested successfully. I don't know when this version will be released or with which version number. If you hit the problem, stay on 19.8.1 or go for a version newer than 19.10.0 as soon as it is released.
 
@Dev: Thank you for the quick fix!

DOAG Conference 2020

What crazy times we live in. Most conferences are online only; some are also on-premises, but without the necessary hygiene concept to avoid corona (or without enough control), as I have seen on Twitter.

The big DOAG conference this year is an online conference AND an on-premises one - but with a strict hygiene concept. Around half of the presentations are streamed online; all others can only be seen in Nuremberg.

I was selected as a speaker again, which still makes me proud. This year, I will talk about migrating and upgrading a database using the autoupgrade tool over two nodes. I will show this with a Windows environment - because there is a little bit more to do manually - but I will also cover the things you need to do on Linux (which mostly means I will just say: you don't need to do that step on Linux 😉). Unfortunately, the presentation will not be streamed online, which means that if you want to see it, you have to go to Nuremberg (and speak German, as my presentation will be in German).

So, stay safe and see you there - hopefully!




Autoupgrade.jar on MS Windows: Database is open but "Error Cause: The database <--> is currently open with status of CLOSED."

More and more people are using autoupgrade.jar to upgrade their Oracle databases - this is the good news.

But the more people use autoupgrade.jar, the more things arise while working with it. That's the bad news (or maybe it's also good news, because when things happen, we get a better feeling for WHY they happen).
On the blog of my employer, a reader was asking for a solution to the problem that autoupgrade in analyze mode reports a status of CLOSED although the database is open. He can connect with an application, and sqlplus, tnsping, etc. are working.

He was running autoupgrade in analyze mode on a 12c database on an MS Windows server when he hit this error, also with the latest downloaded version of autoupgrade.jar (you should always download the latest autoupgrade.jar from Oracle Support).

The error was similar to this one:

Unsupported language [de], defaulting to English
AutoUpgrade tool launched with default options
------------ ERROR ------------
Error Cause: The Database orcljso2 appears to be down or open with the incorrect binaries for the mode ANALYZE. Ensure it is open with C:\app\oracle\product\12.2.0\dbhome_1

------------ ERROR ------------
Error Cause: The database orcljso2 is currently open with a status of CLOSED.  For ANALYZE mode, it needs to be open in one of the following: [OPEN, MOUNTED].

Unable to connect to database orcljso2 for entry upg1


To be honest, this is NOT an autoupgrade error, because there are some prerequisites you have to fulfill so autoupgrade can do its work. One of the main ones: autoupgrade connects to the database using "sqlplus / as sysdba". There isn't any tnsnames.ora or password involved. If this doesn't work in your environment, you need to fix that. It is also not specific to MS Windows, but in Linux environments connecting "/ as sysdba" typically works (as you do everything using your "oracle" user and the default groups).
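The quickest check is therefore to try exactly what autoupgrade does; on Windows, with the environment pointing to the source home (values are examples):

```
set ORACLE_HOME=C:\app\oracle\product\12.2.0\dbhome_1
set ORACLE_SID=orcljso2
sqlplus / as sysdba
```

If this connection does not bring you to an SQL prompt as SYSDBA, autoupgrade will not be able to connect either.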

But how do you find the root problem, when autoupgrade only tells you "the database is closed"?
Autoupgrade writes very, very detailed logs (or better: traces). As you have specified a log directory for autoupgrade, you can find the logs there.
There are 2 possible places where autoupgrade puts the logs. If you don't specify a local entry, it puts them into the global directory, otherwise into the local one from your database specification in the config file.
  • global.autoupg_log_dir=C:\app\oracle\autoupgrade
  • upg1.log_dir=C:\app\oracle\autoupgrade\orcljso2
In my environment, I can then find the log files "autoupgrade.log" and "autoupgrade_err.log" in the directory C:\app\oracle\autoupgrade\cfgtoollogs\upgrade\auto

Here are some examples of what can go wrong while (not) connecting with "/ as sysdba" and what "autoupgrade.log" shows as output.

1. A missing part in the database home of the source

I have specified
  • upg1.source_home=C:\app\oracle\product\12.2.0\
while the right entry would be
  • upg1.source_home=C:\app\oracle\product\12.2.0\dbhome_1
The entry in the log file then looks like this:
2020-07-09 14:52:45.844 ERROR Invalid value for source_home [C:\app\oracle\product\12.2.0\dbhome1] - Utilities.validateDirectory
2020-07-09 14:53:19.251 ERROR Unexpected IOException [null] [SELECT STATUS FROM SYS.V$INSTANCE;] - ExecuteSql$SQLClient.run
java.io.IOException: Cannot run program "C:\app\oracle\product\12.2.0\bin\sqlplus" (in directory "C:\Users\JOERG~1.SOB\AppData\Local\Temp"): CreateProcess error=2, The System cannot find the File specified
    at java.lang.ProcessBuilder.start(Unknown Source)
    at oracle.upgrade.commons.processes.ExecuteProcess.startSqlPlusProcess(ExecuteProcess.java:315)
    at oracle.upgrade.commons.sql.ExecuteSql$SQLClient.run(ExecuteSql.java:882)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: CreateProcess error=2, The System cannot find the File specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(Unknown Source)
    at java.lang.ProcessImpl.start(Unknown Source)
    ... 4 more

2020-07-09 14:53:19.259 ERROR The database orcljso2 appears to be down or open with the incorrect binaries for the mode ANALYZE,
ensure it is open with C:\app\oracle\product\12.2.0 - UpgradeConfigDBValidator.initializeIsDBUp
2020-07-09 14:53:19.262 ERROR The database orcljso2 is currently open with a status of CLOSED. 
For ANALYZE mode, it needs to be open in one of the following: [OPEN, MOUNTED]. - UpgradeConfigDBValidator.initializeIsDBUp

Solution: Correct the source_home path in your configuration. I think the same error can happen if the source home is not the same as the home the service was started from.
Note: If you specify the home but miss the "_", you get a different error message:
Invalid value for source_home [C:\app\oracle\product\12.2.0\dbhome1]
It was not possible to validate the directory for source_home of entry upg1)

2. Wrong SID in the configuration file

If you have specified a wrong SID in the configuration file, the error is a little bit hidden.

2020-07-16 14:33:53.786 ERROR The database orcljso appears to be down or open with the incorrect binaries for the mode ANALYZE,
ensure it is open with C:\app\oracle\product\12.2.0\dbhome_1 - UpgradeConfigDBValidator.initializeIsDBUp
2020-07-16 14:33:53.798 ERROR The database orcljso is currently open with a status of CLOSED.  For ANALYZE mode, it needs to be open in one of the following: [OPEN, MOUNTED]. - UpgradeConfigDBValidator.initializeIsDBUp
2020-07-16 14:33:53.803 INFO Content of the file C:\app\oracle\product\12.2.0\dbhome_1\sqlplus\admin\glogin.sql is:
--
-- Copyright (c) 1988, 2005, Oracle.  All Rights Reserved.
--
-- NAME
--   glogin.sql
--
-- DESCRIPTION
--   SQL*Plus global login "site profile" file
--
--   Add any SQL*Plus commands here that are to be executed when a
--   user starts SQL*Plus, or uses the SQL*Plus CONNECT command.
--
-- USAGE
--   This script is automatically run
-- - GLoginLogger.logGlogin


So it halts at this state in the log. The only place where you can see that you have specified a wrong SID is the error itself (my database is ORCLJSO2, not ORCLJSO).

Solution: Correct the SID in your configuration file.
Note: If the SID is correct but the dbname is wrong, the autoupgrade tool connects to the database and starts working.

Wrong configuration is:
    upg1.dbname=orcljso
    upg1.sid=orcljso2

Output of the autoupgrade.log at job level is:
   DataBase Name:orcljso2
   Sid Name     :orcljso2

3. Wrong SQLNET configuration or ORA_DBA group missing

Another reason why the autoupgrade tool cannot connect to the database is that your sqlnet configuration contains values which don't allow it, or your (Windows shell) user is not part of the ORA_DBA group.

As an example, you have specified:
SQLNET.AUTHENTICATION_SERVICES = (NONE)

The output in the logfile is pretty much the same as for a wrong SID:

2020-07-16 15:06:19.805 ERROR The database orcljso2 appears to be down or open with the incorrect binaries for the mode ANALYZE,
ensure it is open with C:\app\oracle\product\12.2.0\dbhome_1 - UpgradeConfigDBValidator.initializeIsDBUp
2020-07-16 15:06:19.821 ERROR The database orcljso2 is currently open with a status of CLOSED.  For ANALYZE mode, it needs to be open in one of the following: [OPEN, MOUNTED]. - UpgradeConfigDBValidator.initializeIsDBUp
2020-07-16 15:06:19.825 INFO Content of the file C:\app\oracle\product\12.2.0\dbhome_1\sqlplus\admin\glogin.sql is:
--
-- Copyright (c) 1988, 2005, Oracle.  All Rights Reserved.
--
-- NAME
--   glogin.sql
--
-- DESCRIPTION
--   SQL*Plus global login "site profile" file
--
--   Add any SQL*Plus commands here that are to be executed when a
--   user starts SQL*Plus, or uses the SQL*Plus CONNECT command.
--
-- USAGE
--   This script is automatically run
-- - GLoginLogger.logGlogin



Solution: Check your SQLNET configuration, e.g. set SQLNET.AUTHENTICATION_SERVICES = (NTS), and start "lusrmgr" to check that the user account is part of the ORA_DBA group. Be aware that there can also be a group like "ORA_<ORACLE_HOME>_DBA" in "lusrmgr"!
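The corresponding entry in sqlnet.ora (in %ORACLE_HOME%\network\admin) would then be:

```
SQLNET.AUTHENTICATION_SERVICES = (NTS)
```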

As things are a little bit special on MS Windows, the documentation for authentication can be found here.
If you use a domain account, have a special look at the documentation section "Overview of Operating System Authentication Enabled at Installation". You need to put this domain account into ORA_DBA AND grant it local administrative rights.

Hope that helps you run autoupgrade.jar (even on MS Windows) successfully.



Autoupgrade.jar - Skip Fixups Step-by-Step

Autoupgrade will become more and more important for upgrading Oracle databases to the newer releases (18c, 19c, 20c). Mike Dietrich, for example, said at the DOAG conference 2019 that autoupgrade will also be the future of Oracle Exadata database upgrades.

The autoupgrade tool is evolving very quickly and becomes more stable and more helpful with every new release (nearly every quarter, sometimes every 3 or 4 weeks, you can download a new version from support.oracle.com).

But there are still things you should know, and things that can be misunderstood in the documentation if you are just starting with the autoupgrade tool. And then you may not get the result(s) you are expecting.

One of the things I write about today is how you can skip automatically running fixups. This is useful, for example, if you want to prepare the database with the pre-upgrade part and run autoupgrade.jar later at a specific time (e.g. at the weekend or late in the evening) and not in dialog mode. This means you spend less time in the "upgrade" phase itself (e.g. gathering statistics is part of the normal prefixups).
It is also helpful if you run autoupgrade.jar in "deploy" or "upgrade" mode, because BOTH modes automatically run POST fixups (and you may want to skip one or more and do them manually later).

As always, the first step is to download the latest autoupgrade.jar from My Oracle Support!

Then, you check the Java environment and create your own configuration file. (A small how-to can be found in the preparation part at https://a-different-view-by-js.blogspot.com/2019/12/everything-goes-auto-1.html, within my slide deck from DOAG 2019 "Alles wird Auto" at https://a-different-view-by-js.blogspot.com/p/blog-page.html, or on Mike's blog: https://mikedietrichde.com/2019/06/13/create-and-adjust-the-config-file-for-autoupgrade-19c/.)

I typically use "create_sample_file config" and change that to fit my needs.


After everything is specified in the config file, the autoupgrade.jar is started in analyze mode.


After some minutes the job has finished. Now it is essential to note down the job number, because you will need it later in your configuration file.

 

In the specified log path, you have a subfolder <job-number>/prechecks with a couple of files.


First, check ALL the issues in the preupgrade HTML report carefully, because you want to find the ones you don't want autoupgrade.jar to fix "automatically" (in the pre or post phase) during a later run.

Then, you need to edit the fixup file in the same directory. It's the <database>_checklist.cfg file. 

E.g. if you want to skip the "DICTIONARY_STATS" fixup, you have to change the [runfix] entry from YES (the default, as you can see in the picture) to NO and save the file.
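The entry in the checklist file looks roughly like this (the exact fields can vary between autoupgrade versions); only the [runfix] value is changed:

```
[checkname]        DICTIONARY_STATS
[stage]            PRECHECKS
[fixup_available]  YES
[runfix]           NO
[severity]         RECOMMEND
```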



Now have a look at the documentation for overriding the default fixups. The documentation clearly states: "By default, AutoUpgrade uses the most recent generated file." This means that if you run the autoupgrade tool later in "deploy" mode, the new job creates a new checklist file and does not recognize yours automatically!


If you use the autoupgrade mode "upgrade", the prefixup phase is completely skipped, so even the fixups you do want to run will not run.

You have to create a new entry in your autoupgrade configuration.
The parameter is called "checklist", and there you specify the checklist configuration with the modified prefixup steps.
I like to copy/paste that from another default configuration file I have, so I only need to uncomment the last line and put the right directory/filename into the configuration.



So the configuration for checklist in my file after the change is:

upg1.checklist=c:\oracle\autoupgrade\logs\ECKES\ECKES\100\prechecks\eckes_checklist.cfg

I typically also fix some other things manually, so I run the analyze phase more than once and skip all the manually fixed things in the eckes_checklist.cfg of my FIRST job. This makes it easier to handle, and I don't need to remember which fixup-skip configuration file I have specified in my autoupgrade configuration.

The last step is to run autoupgrade again in "deploy" mode. It will take the checklist file specified in your config instead of a new one, run the fixups in there, skip the ones set to "NO", proceed with the upgrade, run the post fixups which are not set to "NO" and finish the upgrade.

Isn't that easy compared with all the manual steps you had to do before when upgrading your database?




MS Windows, Virtual Service Accounts and Non-CDB to CDB/PDB conversion

Sometimes, when you test something, you stumble over something else. This happened to me 🙋 while testing autoupgrade.jar (I will also write a blog update in the next days about the latest version) with a pre-release lab version (thanks to Mike Dietrich for sharing it).
Part of this pre-release version is the conversion of a non-CDB database into a PDB, plugging it automatically into an already existing CDB.
(explanation for the screenshots: ECKES is a 12.2 NON-CDB database, ECKES1 is an empty 19c CDB).

👉 By the way - 19c is the last version where the non-CDB architecture is supported, AND with 19c and later you get 3 (THREE!) PDBs free per CDB. Even with Standard Edition 2 licenses! That's great, so use it!

But back to my tests - what happened?
The old Oracle database (ECKES) created an XML metadata file after it was upgraded to 19c. You need this file to plug the database into an already existing CDB later on, but the CDB (ECKES1) wasn't able to read this XML metadata file. This can happen in every environment, not only with the autoupgrade tool!

I got the following error on every run, even when I did the non-CDB to PDB migration myself:
ORA-65026 XML metadata file error : LPX-00202: could not open

(By the way, some people get ORA-27041: unable to open file for this error.)

While testing manually, I later found out that the CDB database was only able to read the XML when it was placed in a subdirectory of this Oracle database version, e.g. %ORACLE_HOME%\database, but not if it was written automatically by the autoupgrade tool to its own directories or, e.g., to my c:\temp.

You can see it in the following screenshot, where DBMS_PDB.CHECK_PLUG_COMPATIBILITY succeeded for the first call (c:\oracle\product\19.3.0.0\database\Eckes.xml), but not for the second, the home of the autoupgrade tool (c:\oracle\product\upg_logs\ECKES\...).
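The manual compatibility check mentioned above can be sketched like this, run from SQL*Plus on the target CDB (the file path and PDB name are examples from my test setup):

```sql
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  -- returns TRUE if the described non-CDB can be plugged in;
  -- details of any violations end up in PDB_PLUG_IN_VIOLATIONS
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => 'c:\oracle\product\19.3.0.0\database\Eckes.xml',
                  pdb_name       => 'ECKES');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'YES' ELSE 'NO' END);
END;
/
```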


It took me some weekend hours to find out why this happens.
With Oracle 19c, I installed the database software using a "virtual account" - I think most people will do it that way to avoid creating special users in Windows. After that, I used the dbca to create an empty CDB for my tests.

The "old" database software is running as "Local System", and so was the autoupgrade service from the new home, while the freshly created 19c CDB was running with the "virtual account" - as "NT Service\OracleService<MySID>".


When an XML metadata file is created, it is created as "Local System", not as "NT Service\OracleService<MySID>", and therefore the freshly created file lacks the read/write permissions of the service accounts (ORA_OraDB19Home1_SVCACCTS).
You can see here that the XML metadata file created in the %ORACLE_HOME% subdirectory does have the rights,


while the XML metadata file created in the autoupgrade.jar home doesn't have the rights of the service account.


My first thought was: why is it not enough to have "ORA_DBA" as a group on both files? But that's something which is unfortunately built into Windows.

How can you fix that?
If you are doing a non-CDB to PDB conversion manually, it's easy to choose the right directory - e.g. place the file in the database subdirectory of your new %ORACLE_HOME%. If you want to do it in a more automated way, or you want to store the XML metadata file somewhere else, you have to change the service of the CDB into which you want the database to be plugged. You need to set it to "Local System" for the duration of the migration.



After the migration is done, you can set the service back to the virtual service account.

⚡ A comment on that: Be aware that you need to restart the service (which restarts your database instance) after you have changed the "Log on as" setting in the MS Windows service properties!

Another way to fix issues with Oracle databases on MS Windows is to switch to Linux environments, e.g. Oracle's database appliances. You can avoid a lot of the things you struggle with on Windows. 😉

Hope that helps someone avoid hours of investigation.
 

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 5 - DBA still matters...

As this post is a part of a series of posts regarding Autoupgrade-tool, you might want to start with the first (configuration) or the second post (pre-upgrade analysis).

Like every other DBA, I am not error free. Any good administrator does things wrong sometimes; this is normal, especially if you do a lot of stuff in a very short time and you are very busy.

Well, to be honest, things also happened because I made some mistakes. E.g. in one of my tests (with GRP) I didn't use an administrator command prompt for running the autoupgrade tool on MS Windows. MS Windows quit that with an OpenSCManager failed, Access denied error (Slide 48).
As I tried to use "restore" in the console, Windows threw the error at me again. What should I do now? I was lost in the middle of nowhere (ok, I could have gone back to my VM snapshot, but I wanted to test the tool).
I exited the console (it warned me that 1 job would stop) (Slide 49) and started the autoupgrade tool again in an administrator command window.

It attached automatically to the old job and I started a restore again. I don't know if there was something still in an undefined state, but at first nothing happened.
Starting the "restore" a second time was successful. I watched the job status reporting the flashback, in parallel with the alert.log, and after a while the database was back at the pre-upgrade state (Slides 50-54).
That's very robust behaviour, even if the first restore seemed to do nothing (I think it was able to fix a broken state, so the second execution could succeed).

A second error of mine arose while using the "add_after_upgrade_pfile" parameter (Slide 55). I had a stupid spelling mistake in that file. So autoupgrade finished, but the database was not started.
A look at the detailed log file showed that the pfile wasn't copied/converted to an spfile due to this error. I then copied the corresponding init.ora to the dbs/database subdirectory, created an spfile out of it and everything was fine. No need to roll back using the GRP...

If you are on MS Windows, one last thing happened while creating the Windows service in the new Oracle Home with oradim. I was lazy and just copied the whole command from the preupgrade.html.
The DIM-00003 error I got doing this is misleading.
There is no missing parameter - you just can't specify the spfile's name after the "-spfile" switch (Slide 57).

My personal summary (Slide 58):

Always, always, always download the newest version of the autoupgrade tool. As it evolves and improves fast, check often whether there is a new version of it.

Well, "everything automatic" is relative. You can still run into some hassles (as I did on MS Windows), you may try things which are not for production use, and so on - things can happen not only with the autoupgrade tool, but also at the database level, or because you as administrator are doing something wrong (as most of my examples show). By the way, there was a whole session at the DOAG conference about what can happen (from a database perspective) during an upgrade (and how you can find/fix the errors).

The tool is hands-free, yes, but personally I would prefer to keep an eye on it the first few times it is used in production. As always: if you test your configuration well in your environment and everything works there, then it will also work in production! This is where I used VM snapshots a lot in my test environments, to test as much as I could in a short timeframe.

What really impressed me is how stable and robust a tool can be that has not been on the market that long, how well resume and restore work and, again, how detailed the logs are.

As we have a lot of customers in Switzerland running Windows with (Swiss) German or (Swiss) French language settings, we need to test the newer versions of the autoupgrade tool.
The Linux environments of our customers are all in English and also don't have the MS Windows service problems, so it's easier for us to use the tool there. If you have more than 1 or 2 databases and they are similar (especially the environments), it really makes sense to start using the tool.

Hope you enjoyed the series. I will do some more tests next year. By the way, Mike Dietrich said in his session that autoupgrade.jar will be the tool for upgrading Exadata databases in the future!


Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 4 - Non-CDB to PDB conversion after upgrade

As this post is a part of a series of posts regarding Autoupgrade-tool, you might want to start with the first (configuration) or the second post (pre-upgrade analysis).

To convert a non-CDB to a PDB automatically after the upgrade, you can set "target_cdb" (first create an empty CDB or use an already existing one) in your configuration file.
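The relevant entries in the config file could look roughly like this (SID and CDB name are the examples from my test setup):

```
upg1.sid=ECKES
upg1.target_cdb=ECKES1
```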

While testing that, I got an error that the guaranteed restore point couldn't be dropped. But it didn't exist, as I had set "restoration=no" in my configuration (Slides 42, 43). Well, I opened another SR because this step fails on Windows (on Linux I only had CDBs already, therefore I couldn't test it there). Mike Dietrich then told me that this feature is shown on his blog, but it does not exist in the documentation. As it is not officially released yet, it should show the future of the tool.

As Mike was right that this is not a scenario for an SR, I closed mine immediately (Slide 44) - for sure, there is no issue with a non-production use of the tool. Only 2 or 3 days (!) later (now speaking with development directly) I got a newer pre-version from Mike personally! So I was able to go forward with my tests.

The autoupgrade.jar tool then stepped over the GRP problem but ran into a plug-in violation (Slides 45/46). The, again, very good logfiles recorded an error that the description.xml could not be read. The description.xml file did exist, so was it another non-production-use thing?
Development helped me analyse it, and so we found out together that the problem seems to be related to database version 19c (on Windows). If we put the description.xml into ORACLE_HOME/database, the file can be found, but not at its original place (Slide 47).
While that one thing is database bound, the fixes for the GRP will surely appear in the December or January release of the tool.


Edit (December 2nd): What happened after the DOAG conference: With development feedback and some long testing runs over the weekend, I figured out where the problem starts (it's a Windows/Oracle Database rights issue). I am waiting for the developers' assessment on that; maybe I will write another, different post, which is only slightly related to autoupgrade.jar, as the problem may hit you in every MS Windows environment.

Edit (May 14th, 2020): The newest version of autoupgrade.jar now officially supports Non-CDB to PDB conversion. I have not tested that yet.

The next post for my DOAG session can be found here.

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 3 - Deploy mode

As this post is a part of a series of posts regarding Autoupgrade-tool, you might want to start with the first (configuration) or the second post (pre-upgrade analysis).

I did most of my tests in deploy mode. Because I used snapshots (instead of backups) a lot while testing, it was easy (even with SE2) to revert to the last state. Snapshots have been my best friends, as I didn't want to wait for restore/recovery...

As I said, I was sometimes a little confused that with "lsj", for example, the status showed "finished" while the message said "starting" (Slide 26), so I preferred the detailed "status -job <jobno>" to look at the status of a job.
There are a lot of details you can see, e.g. the operation, what is already done, how many stages are pending, etc.; for a database with CDB architecture, the status command also shows all PDBs (including the seed) that are touched by the tool (Slides 27 and 28).
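For reference, the console interaction might look roughly like this (the job number is just an example):

```text
upg> lsj
upg> status -job 100
```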

As the first tests without archivelog mode succeeded, I tried "restoration=YES" in my configuration. This means the tool creates a guaranteed restore point (GRP, Enterprise Edition only) to which you can roll back the database if something goes wrong.

For using a GRP you need a Fast Recovery Area (FRA). Most of our customers don't use an FRA by default, so in our case this is something we have to introduce, if only for the duration of the upgrade.
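Enabling an FRA just for the upgrade can be sketched like this; the size and path are assumptions that must fit your environment:

```sql
-- set (or enlarge) the recovery area before starting deploy mode
ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = 'D:\oracle\fra' SCOPE=BOTH;
```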

While testing with a GRP, after some minutes another "status" command showed a warning: the recovery area was nearly full, and the upgrade could "appear hung" if the recovery area filled up (Slide 29).
To find out what happens when the FRA is full, I later started another autoupgrade run with a far too small FRA (Slide 30).
Thankfully, the tool did not hang, but threw an unexpected exception. In the very detailed logs (it's more a debug trace than a normal logfile) you can easily find the error: "MIN_RECOVERY_AREA_SIZE" is too small.

Fixing that can be done online with "alter system", but only with scope=MEMORY, because the database is restarted a couple of times with a pfile while it runs through deploy mode (Slide 31).
You can either wait for this error to happen more than once or, as I did, change the different pfiles as well, e.g. "during_upgrade_pfile_catctl.ora" (which is located in the dbupgrade subdirectory of your job). After resuming the job (Slide 33), the autoupgrade finished fine (Slides 34, 35, 38).
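The online fix at that point could be sketched as follows; the value is an example, and only SCOPE=MEMORY works, because the instance is running on a temporary pfile during deploy mode:

```sql
-- enlarge the recovery area in the running instance only
ALTER SYSTEM SET db_recovery_file_dest_size = 40G SCOPE=MEMORY;
```

Afterwards, the job can be continued from the autoupgrade console with "resume -job <jobno>".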

As the size of the FRA can be critical, it's good to set it big enough before you begin with deploy mode. You could, for example, use the "before_action" parameter in your configuration to change the FRA size, or you can use add_during_upgrade_pfile (Slide 32).
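In the configuration file this could look like the following sketch; the script path, the fragment file and the exact parameter syntax are assumptions on my side, so please check the current documentation:

```ini
# run a script that enlarges the FRA before the upgrade starts
upg1.before_action=C:\scripts\resize_fra.cmd
# or merge additional parameters into the pfile used during the upgrade
upg1.add_during_upgrade_pfile=C:\scripts\fra_params.ora
```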

While the tool is running, you can see with the status command that it's e.g. upgrading CDB$ROOT, upgrading PDB$SEED or your user PDBs (in parallel) (Slides 34/35).
There is also a "tasks" command (Slide 36) where you can see all the tasks that are running or waiting, but I didn't see anything there that I think I would need in my daily work.

If you run autoupgrade.jar on MS Windows, a temporary service in the target Oracle Home is created (Slide 37) and dropped afterwards; you then have to drop the old original service and create the new one manually, which is fine.
When the upgrade has finished, don't forget to drop the GRP yourself, otherwise the FRA will fill up completely (you could set a parameter in your configuration to do this automatically, but I prefer to test the upgraded environment first) (Slide 38).
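Dropping the GRP manually can be done like this; the restore point name below is an assumption, so first query v$restore_point for the real name the tool created:

```sql
-- find the guaranteed restore point created by autoupgrade
SELECT name, guarantee_flashback_database FROM v$restore_point;
-- then drop it (name is an example)
DROP RESTORE POINT AUTOUPGRADE_GRP_EXAMPLE;
```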

On MS Windows I had an unexpected error again in my German environment: stopping the service for the database to be upgraded was successful, but the tool raised an error (Slide 40). I opened another SR for that, and I think it will be fixed soon (there is a bug open for it). After installing the US language pack and changing the Oracle user account to "US" language, everything worked fine.

The next post will be about the Non-CDB to PDB conversion with the autoupgrade tool.