Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 5 - DBA still matters...

As this post is part of a series about the Autoupgrade tool, you might want to start with the first post (configuration) or the second (pre-upgrade analysis).

Like every other DBA, I am not error-free. Every good administrator does things wrong sometimes; this is normal, especially if you do a lot of work in a very short time and are very busy.

Well, to be honest, some things happened because I made mistakes. For example, in one of my tests (with a GRP) I didn't use an administrator command prompt to run the autoupgrade tool on MS Windows. Windows answered that with an "OpenSCManager failed, Access denied" error (Slide 48).
When I tried to use "restore" in the console, Windows threw the same error again. What should I do now? I was lost in the middle of nowhere (OK, I could have gone back to my VM snapshot, but I wanted to test the tool).
I exited the console (it warned me that one job would stop) (Slide 49) and started the autoupgrade tool again in an administrator command window.

It attached automatically to the old job and I started a restore again. I don't know if something was still in an undefined state, but at first nothing happened.
Starting the "restore" a second time was successful. I watched the job status saying it was flashing back, in parallel with the alert.log, and after a while the database was back at the pre-upgrade state (Slides 50-54).
That's very robust behaviour, even if the first restore seemed to do nothing (I think it was able to fix a broken state, so the second execution could succeed).
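For reference, my restore sequence in the autoupgrade console looked roughly like this (the job number is a placeholder, the comments are mine):

```
upg> lsj                  -- the tool re-attached to the old job automatically
upg> restore -job 100     -- first attempt: seemingly nothing happens
upg> restore -job 100     -- second attempt: the flashback to the GRP runs
upg> status -job 100      -- watch the flashback, in parallel with the alert.log
```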

A second error of mine arose while using the "add_after_upgrade_pfile" parameter (Slide 55). I had a stupid spelling mistake in that file, so autoupgrade finished, but the database was not started.
A look at the detailed log file showed that the pfile wasn't converted to an spfile due to this error. I then copied the corresponding init.ora to the dbs/database subdirectory, created an spfile out of it, and everything was fine. No need to roll back using the GRP...
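The manual fix itself is plain Oracle SQL; a sketch, with a hypothetical SID (MYDB) and Windows path:

```sql
-- after fixing the typo in the init.ora copied to the database subdirectory:
CREATE SPFILE FROM PFILE='C:\oracle\product\19\dbhome_1\database\initMYDB.ora';
STARTUP
```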

One last thing happened on MS Windows while creating the Windows service in the new Oracle Home with oradim: I was lazy and just copied the whole command from the preupgrade.html.
The DIM-00003 error I got doing this is misleading.
There is no missing parameter - you just can't specify the spfile's name after the "-spfile" switch (Slide 57).
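A sketch of what went wrong, with a hypothetical SID (MYDB) and path:

```
REM raises DIM-00003, although nothing is missing - -SPFILE takes no value:
oradim -NEW -SID MYDB -SPFILE C:\oracle\product\19\dbhome_1\database\spfileMYDB.ora

REM works - -SPFILE is just a flag saying the instance uses an spfile:
oradim -NEW -SID MYDB -SPFILE
```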

My personal summary (Slide 58):

Always, always, always download the newest version of the autoupgrade tool. As it evolves and improves fast, check often whether there is a new version.

Well, "everything automatic" is somewhat relative. You can still run into some hassles (as I did on MS Windows), you may try things that are not yet released for production use, and so on. Things can happen not only with the autoupgrade tool itself, but also at the database level, or because you as administrator are doing something wrong (as most of my examples show). By the way, there was a whole session at the DOAG conference about what can happen (from a database perspective) during an upgrade and how you can find and fix the errors.

The tool is hands-free, yes, but personally I would prefer to keep an eye on it the first few times you use it in production. As always: if you test your configuration well in your environment and everything works there, it will also work in production! This is where I used VM snapshots a lot in my test environments, to test as much as I could in a short timeframe.

What really impressed me is how stable and robust this tool is, given how short a time it has been on the market, how well resume and restore work and, again, how detailed the logs are.

As we have a lot of customers in Switzerland running Windows with (Swiss) German or (Swiss) French language settings, we need to test the newer versions of the autoupgrade tool.
Our customers' Linux environments are all in English and also don't have the MS Windows service problems, so it's easier for us to use the tool there. If you have more than one or two databases and they are similar (especially their environments), it really makes sense to start using the tool.

Hope you enjoyed the series. I will do some more tests next year. By the way, Mike Dietrich said in his session that autoupgrade.jar will be the tool for upgrading Exadata databases in the future!


Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 4 - Non-CDB to PDB conversion after upgrade

As this post is part of a series about the Autoupgrade tool, you might want to start with the first post (configuration) or the second (pre-upgrade analysis).

To convert a non-CDB to a PDB automatically after the upgrade, you can set "target_cdb" in your configuration file (first create an empty CDB or use an already existing one).
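In the config file this is a single extra line per database; a sketch with hypothetical names (upg1 as the prefix, CDB19 as the existing or freshly created empty CDB):

```
upg1.sid=MYDB
upg1.source_home=C:\oracle\product\12.1.0\dbhome_1
upg1.target_home=C:\oracle\product\19.0.0\dbhome_1
upg1.target_cdb=CDB19
```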

While testing that, I got an error that the guaranteed restore point couldn't be dropped. But it didn't exist, as I had set "restoration=no" in my configuration (Slides 42, 43). Well, I opened another SR because this step fails on Windows (on Linux I only had CDBs already, so I couldn't test it there). Mike Dietrich then told me that this feature is shown on his blog but does not exist in the documentation. As it is not officially released yet, it is meant to show the future of the tool.

As Mike is right that this is not a scenario for an SR, I closed mine immediately (Slide 44) - for sure, there is no issue with a non-production use of the tool. Only two or three days (!) later (now speaking with development directly) I got a newer pre-release version from Mike personally! So I was able to go forward with my tests.

The autoupgrade.jar tool then stepped over the GRP problem but ran into a plug-in violation (Slides 45/46). The, again, very good logfiles recorded an error that the description.xml could not be read. The description.xml file did exist, so was it another non-production-use thing?
Development helped me analyse it, and together we found out that the problem seems to be related to database version 19c (on Windows). If we put the description.xml into ORACLE_HOME/database, the file can be found, but not at its original place (Slide 47).
While the description.xml issue is bound to the database, the fixes for the GRP problem will surely show up in the December or January release of the tool.


Edit (December 2nd): What happened after the DOAG conference: with development feedback and some long testing runs over the weekend, I figured out where the problem starts (it's a Windows/Oracle Database rights issue). I am waiting for the developers' assessment on that; maybe I will write another, different post, which is only a little bit related to autoupgrade.jar, as the problem may hit you in any MS Windows environment.

Edit (May 14th, 2020): The newest version of autoupgrade.jar now officially supports non-CDB to PDB conversion. I have not tested that yet.

The next post for my DOAG session can be found here.

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 3 - Deploy mode

As this post is part of a series about the Autoupgrade tool, you might want to start with the first post (configuration) or the second (pre-upgrade analysis).

I did most of my tests in deploy mode, because I used snapshots (instead of backups) a lot while testing, so it was easy (also with SE2) to revert to the last state. Snapshots were my best friends, as I didn't want to wait for restore/recovery...

As I said, I was sometimes a little confused that with "lsj", e.g., the status showed "finished" while the message said "starting" (Slide 26), so I preferred the detailed
status -job <jobno> to look at the status of a job.
There are a lot of details you can see, e.g. the operation, what is already done, how many stages are pending, etc.; on a database with CDB architecture, the status command also shows all PDBs (including the seed) that are touched by the tool (Slides 27 and 28).
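The difference between the two commands, roughly (job number and output abbreviated):

```
upg> lsj
-- one line per job: job number, stage, operation, status, message

upg> status -job 101
-- detailed view: operation, completed and pending stages and, on a CDB,
-- the state of CDB$ROOT, PDB$SEED and every user PDB the tool touches
```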

As the first tests without archivelog mode succeeded, I tried "restoration=YES" in my configuration. This means you use a guaranteed restore point (GRP, Enterprise Edition only) to which you can roll back the database if something goes wrong.

To use a GRP you need a Fast Recovery Area (FRA). Most of our customers don't use an FRA by default, so this is something we would need to introduce in our case, exclusively and only for the duration of the upgrade.

Testing with a GRP, after some minutes I saw with another "status" command that there was a warning: the recovery area is nearly full, and the upgrade could "appear hung" if the recovery area fills up (Slide 29).
To find out what happens when the FRA is full, I later started another autoupgrade run with a far too small FRA (Slide 30).
Thankfully, the tool did not hang, but threw an unexpected exception. In the very detailed logs (it's more a debug trace than a normal logfile) you can easily find the error: "MIN_RECOVERY_AREA_SIZE" is too small.

Fixing that can be done online with "alter system", but only with scope=MEMORY, because the database is restarted a couple of times with a pfile while it runs through deploy mode (Slide 31).
You can either wait for this error to happen more than once or, as I did, change the different pfiles in addition, e.g. "during_upgrade_pfile_catctl.ora" (which is located in the dbupgrade subdirectory of your job). After resuming the job (Slide 33), the autoupgrade finished fine (Slides 34, 35, 38).
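A sketch of the online part of the fix (the 20G value is arbitrary - pick what fits your disk):

```sql
-- only SCOPE=MEMORY works here; during deploy mode the database is
-- restarted from pfiles, so changes to the spfile would not be picked up
ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE = MEMORY;
```

The same value then also belongs in the pfiles mentioned above, so it survives the restarts.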

As the size of the FRA can be critical, it's good to set it big enough before you begin with deploy mode. You could, for example, use the "before_action" parameter in your configuration to change the FRA size, or you could use add_during_upgrade_pfile (Slide 32).
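Sketched in the configuration (script and file names are hypothetical, and the resize script is yours to write):

```
# run your own script before deploy starts, e.g. to enlarge the FRA
upg1.before_action=/home/oracle/scripts/resize_fra.sh
# or merge parameters into the pfile used during the upgrade;
# the file could contain e.g.: db_recovery_file_dest_size=20G
upg1.add_during_upgrade_pfile=/home/oracle/scripts/fra_during.ora
```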

While the tool is running, the status command shows that it is, e.g., upgrading CDB$ROOT, PDB$SEED or your user PDBs (in parallel) (Slides 34/35).
There is also a "tasks" command (Slide 36) where you can see all the tasks that are running or waiting, but I didn't see anything there that I think I would need in my daily work.

If you run autoupgrade.jar on MS Windows, a temporary service is created in the target Oracle Home (Slide 37) and dropped afterwards; you have to drop the old original service and create the new one manually, which is fine.
When the upgrade has finished, don't forget to drop the GRP yourself, otherwise the FRA will fill up completely (you could set a parameter in your configuration to do this automatically, but I prefer to test the upgraded environment first) (Slide 38).
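Dropping the GRP is plain SQL; the name below is a placeholder - query v$restore_point first to see what the tool actually created:

```sql
-- list restore points and check which one is guaranteed
SELECT name, guarantee_flashback_database FROM v$restore_point;

-- then drop it to release the space in the FRA
DROP RESTORE POINT my_autoupgrade_grp;
```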

On MS Windows I got an unexpected error again in my German environment: stopping the service of the database to be upgraded was successful, but the tool still raised an error (Slide 40). I opened another SR for that, and I think it will be fixed soon (there is a bug open for it). After installing the US language pack and changing the Oracle user account to "US" language, everything worked fine.

The next post will be about the Non-CDB to PDB conversion with the autoupgrade tool.

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 2 - Pre-Upgrade-Analysis

This is the second post about my first experiences with the Autoupgrade tool. Please start with the first one.

You start the pre-upgrade analysis with the parameters -config <yourconfigfile> -mode analyze in console mode (Slide 20).

With "lsj" you can see the job number and some information, like stage, operation, status and message. As I was not able to understand all the different status combinations in the output, I came to prefer picking the job number and issuing a
"status -job <jobno>" to get the detailed information on a job.
The analysis job finishes after some minutes (1 to 3, depending on whether it runs on my VirtualBox on the laptop or on the VMware server) (Slide 21).
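Put together, an analysis run looks roughly like this (config file name and job number are examples):

```
java -jar autoupgrade.jar -config mydbs.cfg -mode analyze

upg> lsj               -- job number plus stage, operation, status, message
upg> status -job 100   -- the detailed view of one job
```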

In your autoupgrade log directory you then get a number of files; the two I want to mention here are preupgrade.html and checklist.cfg (Slide 22).

As not all servers have a web browser installed (on Linux there mostly isn't a GUI), you really should copy the html file to your desktop!

There is so much information in this html file, and you should treat all messages seriously, independent of whether they are "errors" or only "warnings", "recommended" or "infos" (Slides 23-25).

What you have to look at is which topic raises a message and whether a fixup is available. "Fixup available: yes" means the autoupgrade tool can correct this topic automatically by itself, before (or after) the database is upgraded.

In our "customer-like" environment we had, e.g., things in the DBA recycle bin. This can be fixed automatically, but as a DBA I prefer to have a look into the recycle bin first. Maybe there is something in there we still need?

Another topic found was that users with 10G password versions exist - they won't be able to connect to the 19c database later on! It will be really bad if you don't fix that yourself.
The "streams configuration" found in the database wasn't used, so I dropped it manually (the html shows you the DBMS packages you need as output).

Again, check all the things you see in the html carefully! Then decide whether to let an issue be fixed automatically (if a fixup is available), to change the configuration, or to ignore the issue (like ignoring some invalid procedures in our application schema, depending on the type of software in use).

If you want to skip a fix, you can tweak the checklist.cfg file by changing runfix from YES to NO. Keep in mind NOT to run the autoupgrade tool afterwards with the "deploy" parameter; you need to run the fixups and the upgrade modes manually.
Deploy creates a new pre-analysis, fixes things and upgrades the database in another job; it does not know about things you may have changed in the older checklist.cfg, which belongs to an old, finished job.
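The resulting workflow, sketched (the config file name is an example; edit runfix in the checklist.cfg of the finished analyze job first):

```
java -jar autoupgrade.jar -config mydbs.cfg -mode fixups
java -jar autoupgrade.jar -config mydbs.cfg -mode upgrade
```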

Another topic we often see: some of us have underscore parameters in the spfile, and the tool detects this and says there is no fixup available. This is true, but there is a workaround.
If you want to get rid of all underscore parameters, you can set a global parameter (remove_underscore_parameters=yes) and add back the ones you really need later with the add_to_pfile files you can specify. Otherwise you can ignore this and proceed with the fixups or with deploy mode (the underscore parameters will still be in your spfile after the upgrade).
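Sketched in the configuration (the add-back file name is hypothetical):

```
# drop all underscore parameters during the upgrade...
global.remove_underscore_parameters=yes
# ...and re-add the ones you really need via a pfile fragment,
# e.g. a file containing: _my_needed_parameter=value
upg1.add_after_upgrade_pfile=/home/oracle/scripts/underscores_after.ora
```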

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 1 - introduction and configuration

At the DOAG conference 2019 I gave a presentation (due to the filled-up room it was repeated the day after) about the Oracle Autoupgrade tool. The slides are in German and can be found in my Slide Download Center. As most slides are only screenshots, I will write a little bit about this new tool in this series of posts.

The first part will now cover an introduction and how to setup the configuration file.

Introduction:
After OOW 2018 I heard of the Autoupgrade tool for the first time, and I was very curious. I downloaded the slides, and what I found was something I had thought should have existed for years:
a tool which helps you upgrade your databases from one release to another without running all these scripts, pre-installation jars and DBUA manually - with less hands-on work and with a lot of things automated. The more I read, the more curious I got.

Later on, I saw autoupgrade as part of the Oracle 19c documentation, but I wasn't really able to see how it could work, and I also had no time to look into it in detail.

Then, in mid-June 2019, I saw Mike Dietrich's ("Mister Upgrade Guru") blog post about how to create a sample configuration file and with how few parameters an upgrade could be started.
So I decided (after a short conversation with Mike on Twitter, June 13th) to test the tool and to send a proposal for a presentation at the DOAG conference. I had already installed the 19c database software on my laptop with Windows 10, so I decided to do the first small test there.

Unfortunately, I got a Java exception when running autoupgrade.jar with the -version flag. The version downloaded from support.oracle.com had this problem too (Slide 11). It's so typical that it hits me...😳

So I was really sorry to inform Mike that I couldn't test it, due to (in my opinion) a problem with my German environment. I opened an SR to pass all the logs to Oracle Support and Mike's team.
On July 1st (! I had started testing in mid-June! - Slide 13) I got an email from Mike that the bug should be fixed in the version they had uploaded some days before. The fix included a fallback to the default language (English) if the OS language is not supported. I downloaded that version and yes, it was working now (Slide 15).

I asked my colleague to set up a "Swiss customer-like database environment" on a Windows 2012 server, and I prepared a VirtualBox VM with Oracle Linux and some CDBs.
As I was busy over the summer, I was mostly only able to test things in my spare time. So I started with reading the documentation and Mike's blog.
The more I read, the more targets I saw that my employer could reach by using this tool (Slide 7):

As an ISV and system integrator, we have a lot of databases to upgrade next year (most of them from 12.1 to 19c). Some on Linux, some on Windows, some on ODA (but ODA is out of scope here).
The database editions are Standard Edition 2 and Enterprise Edition, and we have all kinds of licenses (Embedded, Application Specific and Full Use).
The main target for us is to get rid of manual work, as this is the most error-prone part: e.g. changing spfile parameters, switching from non-CDB to the new CDB architecture, maybe combining application upgrades with database upgrades (scripted), etc.

After the VirtualBox and the Windows environments (on VMware) had been installed, I was ready to start with the tests.

Preparation:

As a lot of servers don't have (the right) Java (version) installed, it's best to use the Java from the ORACLE_HOME/jdk/bin directory of 18c or 19c (Slide 15). This works fine. Also, as the release cycles of the autoupgrade tool are very short, you should always download the newest autoupgrade.jar from support.oracle.com. Even if you checked the note a week ago, check again before you start your work.
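The call then looks like this (assuming ORACLE_HOME points to the 19c home):

```
# use the JDK shipped with the 18c/19c Oracle Home instead of the OS java
$ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -version
```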

With that in mind, you can run the tool with the -create_sample_file config parameter (by the way, in my German environment it says "unsupported language, defaulting to English") (Slide 16).

It's very easy to use the sample config file (Slide 17) to create your own configuration. A small thing you may find on Windows is that the global log directory is missing backslashes; this does not work out of the box but is easy to fix. On Linux, the default log directory is fine and points to /home/oracle (Slide 18).

The next thing to do is to specify (Slide 19) e.g. the DB_UNIQUE_NAME as "DBNAME", the "SID", "SOURCE HOME", "TARGET HOME" and "TARGET_VERSION" of the database and, if you wish, even more (see the documentation for all parameters). The "START_TIME" parameter can be set to 'now' or to a date/time - the format can be found in the documentation (no NLS_LANG setting). With "add_after_upgrade_pfile" (or delete instead of add, or before or during instead of after) you can automatically change the SPFILE content.

One of the things I really like is to change the "UPG1." prefix to a tag that gives more information (e.g. EDM for our Energy Data Management DB, DAPHNE for the museum's DB, etc.) (Slide 19).

So you can create one config file for all kinds of databases, and they are nicely named. Running "UTLRP" for recompilation is a nice thing; also have a look at the parameter for "TIMEZONE_UPG"rades, as this step is often forgotten in manual upgrades. You should also know the "RESTORATION" parameter; more information on it will follow later.
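Putting the parameters from this post together, a minimal entry could look like this (all names and paths are examples; "edm." is the renamed "upg1." prefix):

```
global.autoupg_log_dir=/home/oracle/autoupgrade

edm.dbname=EDM
edm.sid=EDM
edm.source_home=/u01/app/oracle/product/12.1.0/dbhome_1
edm.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
edm.target_version=19
edm.start_time=now
edm.run_utlrp=yes
edm.timezone_upg=yes
edm.restoration=yes
```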

Once you have set up the config file, you are ready to start working with your database(s).


The next step is the Pre-Upgrade analysis (next post).