Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 5 - DBA still matters...

As this post is part of a series about the Autoupgrade tool, you might want to start with the first post (configuration) or the second one (pre-upgrade analysis).

Like every other DBA, I am not error-free. Any good administrator does things wrong sometimes; this is normal, especially if you do a lot of stuff in a very short time and are very busy.

Well, to be honest, things also happened because I made some mistakes. For example, in one of my tests (with GRP) I didn't use an administrator command prompt to run the autoupgrade tool on MS Windows. Windows quit that with an "OpenSCManager failed, Access denied" error (Slide 48).
When I tried to use "restore" in the console, Windows threw the error at me again. What should I do now? I was lost in the middle of nowhere (OK, I could have gone back to my VM snapshot, but I wanted to test the tool).
I exited the console (it warned me that 1 job would stop) (Slide 49) and started the autoupgrade tool again in an administrator command window.

It attached automatically to the old job and I started a restore again. I don't know if something was still in an undefined state, but at first nothing happened.
Starting the "restore" a second time was successful. I watched the job status saying it was flashing back, in parallel with the alert.log, and after a while the database was back in the pre-upgrade state (Slides 50-54).
That's very robust behaviour, even if the first restore seemed to do nothing (I think it was able to fix a broken state, so the second execution could succeed).

A second error of mine arose while using the "add_after_upgrade_pfile" parameter (Slide 55). I had a stupid spelling mistake in that file, so autoupgrade finished, but the database was not started.
A look at the detailed log file showed that the pfile wasn't copied/converted to an spfile due to this error. I then copied the corresponding init.ora to the dbs/database subdirectory, created an spfile out of it and everything was fine. No need to roll back using the GRP...
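A minimal sketch of that manual fix in SQL*Plus (the path and the SID DB19 are hypothetical examples; use the init.ora you just copied):

SQL> CREATE SPFILE FROM PFILE='C:\app\oracle\product\19.3.0\database\initDB19.ora';
SQL> STARTUP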

One last thing happened on MS Windows while creating the Windows service in the new Oracle Home with oradim: I was lazy and just copied the whole command from the preupgrade.html.
The DIM-00003 error I got doing this is misleading.
There is no missing parameter - you just can't specify the spfile's name after the "-spfile" switch (Slide 57).
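To illustrate with a sketch (SID, start mode and path are hypothetical):

REM fails with DIM-00003 - no file name may follow the switch
oradim -NEW -SID DB19 -STARTMODE auto -SPFILE C:\app\oracle\product\19.3.0\database\spfileDB19.ora

REM works - "-SPFILE" is just a flag without a value
oradim -NEW -SID DB19 -STARTMODE auto -SPFILE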

My personal summary (Slide 58):

Always, always, always download the newest version of the autoupgrade tool. As it evolves and improves fast, check often whether there is a new version.

Well, "everything automatic" is something relative. You can still run in some hassles (as I did on MS Windows), you maybe try things which are non-production-use and so on - not only at the autoupgrade-tool things can happen, but also at the database level or because you as administrator are doing something wrong (like the most of my examples show). By the way, there was a whole session at the DOAG conference what can happen (from a database perspective) during an upgrade (and how you can find/fix the errors).

The tool is hands-free, yes, but personally I would prefer to keep an eye on it the first few times it is used in production. As always: if you test your configuration well in your environment and everything works there, then it will also work in production! This is where I used the VM snapshots in my test environments a lot, to test as much as I could in a short timeframe.

What really impressed me is how stable and robust a tool that has not been on the market for very long already is, how well resume and restore work and, again, how detailed the logs are.

As we have a lot of customers in Switzerland running Windows with (Swiss) German or (Swiss) French language settings, we need to test the newer versions of the autoupgrade tool.
Our customers' Linux environments are all in English and don't have the MS Windows service problems, so it's easier for us to use the tool there. If you have more than one or two databases and they are similar (especially the environment), it really makes sense to step into using the tool.

Hope you enjoyed the series. I will do some more tests next year. By the way, Mike Dietrich said in his session that autoupgrade.jar will be the tool for upgrading Exadata databases in the future!


Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 4 - Non-CDB to PDB conversion after upgrade

As this post is part of a series about the Autoupgrade tool, you might want to start with the first post (configuration) or the second one (pre-upgrade analysis).

To convert a non-CDB to a PDB automatically after the upgrade, you can set "target_cdb" in your configuration file (first create an empty CDB or use an already existing one).
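A sketch of such an entry (the prefix upg1 and the CDB name CDB19 are hypothetical):

upg1.target_cdb=CDB19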

While testing that, I got an error that the guaranteed restore point couldn't be dropped. But it didn't exist, as I had set "restoration=no" in my configuration (Slides 42, 43). Well, I opened another SR because this step failed on Windows (on Linux I only had CDBs, so I couldn't test it there). Mike Dietrich then told me that this feature is shown on his blog, but it does not exist in the documentation. As it is not officially released yet, it should show the future of the tool.

As Mike is right that this is not a scenario for an SR, I closed mine immediately (Slide 44) - for sure, there is no issue with a non-production use of the tool. Only 2 or 3 days (!) later (now speaking directly with development) I got a newer pre-release version from Mike personally! So I was able to go forward with my tests.

The autoupgrade.jar tool then stepped over the GRP problem but ran into a plug-in violation (Slides 45/46). The, again, very good logfiles recorded an error that the description.xml could not be read. The description.xml file did exist, so was it another non-production-use thing?
Development helped me analyse it, and together we found out that the problem seems to be related to database version 19c (on Windows). If we put the description.xml into ORACLE_HOME/database, the file can be found, but not at its original place (Slide 47).
While that one is database-bound, the fixes for the GRP will surely appear in the December or January release of the tool.


Edit (December 2nd): What happened after the DOAG conference: with development feedback and some long testing runs over the weekend, I figured out where the problem starts (it's a Windows/Oracle Database rights issue). I am waiting for the developers' assessment on that; maybe I will write another, different post, only slightly related to autoupgrade.jar, as the problem may hit you in every MS Windows environment.

Edit (May 14th, 2020): The newest version of autoupgrade.jar now officially supports non-CDB to PDB conversion. I have not tested that yet.

The next post for my DOAG session can be found here.

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 3 - Deploy mode

As this post is part of a series about the Autoupgrade tool, you might want to start with the first post (configuration) or the second one (pre-upgrade analysis).

I did most of the tests in deploy mode, because I used snapshots (instead of backups) a lot while testing, so it was easy (also with SE2) to revert to the last state. Snapshots have been my best friends, as I didn't want to wait for restore/recovery...

As I said, I was sometimes a little bit confused that with "lsj", e.g., the status showed "finished" while the message said "starting" (Slide 26), so I preferred the detailed
status -job <jobno> to look at the status of a job.
There are a lot of details to see, e.g. the operation, what is already done, how many stages are pending, etc.; for a database with CDB architecture, the status command also shows all PDBs (including the seed) that are touched by the tool (Slides 27 and 28).

As the first tests without archivelog mode succeeded, I tried "restoration=YES" in my configuration. This means you use a guaranteed restore point (GRP, Enterprise Edition only) that you can roll the database back to if something goes wrong.

To use a GRP you need a Fast Recovery Area (FRA). Most of our customers don't use an FRA by default, so this is something we would need to introduce, exclusively and only for the duration of the upgrade.
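Introducing one online is quick; a minimal sketch (size and location are just examples, not recommendations):

SQL> ALTER SYSTEM SET db_recovery_file_dest_size=20G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest='D:\oracle\fra' SCOPE=BOTH;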

Testing with GRP, after some minutes I saw via another "status" command that there was a warning: the recovery area was nearly full, and the upgrade could "appear hung" if the recovery area filled up (Slide 29).
To find out what happens when the FRA is full, I later started another autoupgrade run with a far too small FRA (Slide 30).
Thankfully, the tool did not hang, but threw an unexpected exception error. In the very detailed logs (it's more a debug trace than a normal logfile) you can easily find the error: "MIN_RECOVERY_AREA_SIZE" is too small.

Fixing that can be done online with "alter system", but only with scope=MEMORY, as the database is restarted a couple of times with a pfile while it runs through deploy mode (Slide 31).
You can either wait for this error to happen more than once or, as I did, additionally change the different pfiles, e.g. "during_upgrade_pfile_catctl.ora" (which is located in the dbupgrade subdirectory of your job). After resuming the job (Slide 33), the autoupgrade finished fine (Slides 34, 35, 38).
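A sketch of that double fix, assuming 20G is the new size (use whatever your database needs): first the running instance,

SQL> ALTER SYSTEM SET db_recovery_file_dest_size=20G SCOPE=MEMORY;

then the same value as a line in the job's during_upgrade_pfile_catctl.ora, so the setting survives the pfile restarts:

*.db_recovery_file_dest_size=20G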

As the size of the FRA can be critical, it's good to set it big enough before you begin with deploy mode. You could, e.g., use the "before_action" parameter in your configuration to change the FRA size, or you could use add_during_upgrade_pfile, for example (Slide 32).
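A sketch of the before_action variant (the script path is hypothetical; the script itself would do the resize shown in the earlier example):

upg1.before_action=/home/oracle/scripts/resize_fra.sh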

While the tool is running, you can see with the status command that it is, e.g., upgrading CDB$ROOT, upgrading PDB$SEED or your user PDBs (in parallel) (Slides 34/35).
There is also a "tasks" command (Slide 36) where you can see all the tasks that are running or waiting, but I didn't see anything there that I would need in my daily work.

If you run autoupgrade.jar on MS Windows, a temporary service is created in the target Oracle Home (Slide 37) and dropped afterwards; you have to drop the old original service and create the new one manually yourself, and that's fine.
When the upgrade has finished, don't forget to drop the GRP yourself, otherwise the FRA will fill up completely (you could set a parameter in your configuration to do this automatically, but I prefer to test the upgraded environment first) (Slide 38).
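The manual cleanup is two statements; look the name up first, as it is whatever autoupgrade created:

SQL> SELECT name FROM v$restore_point WHERE guarantee_flashback_database='YES';
SQL> DROP RESTORE POINT <name_from_the_query>;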

On MS Windows I got an unexpected error again in my German environment: stopping the service of the database to be upgraded was successful, but the tool still raised an error (Slide 40). I opened another SR for that and I think it will be fixed soon (there is a bug open for it). After installing the US language pack and changing the Oracle user account to the "US" language, everything worked fine.

The next post will be about the Non-CDB to PDB conversion with the autoupgrade tool.

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 2 - Pre-Upgrade-Analysis

This is the second post about my first experiences with the Autoupgrade tool. Please start with the first one.

You start the pre-upgrade analysis with the parameters -config <yourconfigfile> -mode analyze in console mode (Slide 20).
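The full call then looks like this (the config file name is an example):

java -jar autoupgrade.jar -config myupgrades.cfg -mode analyze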

With "lsj" you can see the job number and some information, like stage, operation, status and message. As I was not able to understand all the different status combinations at the output, I started to prefer to select only the job number and issue a
"status -job <jobno>" to get the detailed information regarding a job.
The analysis job finishes after some minutes (1 to 3 depending on my virtualbox on the laptop or the VMWare server) (Slide 21).

In your autoupgrade log directory you then get a number of files; the two I want to mention here are the preupgrade.html file and the checklist.cfg file (Slide 22).

As not all servers have a web browser installed (on Linux there mostly isn't a GUI), you really should copy the html file to your desktop!

There is so much information in this html, and you should treat all messages seriously, regardless of whether they are "errors" or only "warnings", "recommended" or "infos" (Slides 23-25).

What you have to look at is which topic raises a message and whether a fixup is available. "Fixup available: yes" means the autoupgrade tool can correct this topic automatically by itself before (or after) the database is upgraded.

For our "customer-like" environment we had e.g. things in the dba recyclebin. This can be fixed automatically, but as DBA I like to prefer to have a look into the recycle bin first.Maybe there is something in there we already need?

Another finding was that users with the 10G password version exist - they won't be able to connect to the 19c database later on! It will be really bad if you don't fix that yourself.
The "streams configuration" found in the database wasn't used, so I dropped it manually (the html shows you the DBMS packages you need as output).

Again, check everything you see in the html carefully! Then decide whether to let an issue be fixed automatically (if a fixup is available), to change the configuration, or to ignore the issue (like ignoring some invalid procedures in our application schema, depending on the type of software in use).

If you want to skip a fix, you can tweak the checklist.cfg file by changing runfix from YES to NO. Keep in mind NOT to run the autoupgrade tool afterwards with the "deploy" parameter; you need to run the fixups and upgrade modes manually.
Deploy creates a new pre-analysis, fixes things and upgrades the database in another job, and it does not know about changes you may have made in the older checklist.cfg, which belongs to an old, finished job.
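So the flow with a tweaked checklist.cfg is, for example (config file name as above):

java -jar autoupgrade.jar -config myupgrades.cfg -mode fixups
java -jar autoupgrade.jar -config myupgrades.cfg -mode upgrade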

Another topic we often see: some of us have underscore parameters in the spfile; the tool detects this and says there is no fixup available. This is true, but there is a workaround.
If you want to get rid of all underscore parameters, you can set a global parameter (remove_underscore_parameters=yes) and add the ones you really need later via the add_to_pfile files you can specify. Otherwise you can ignore this and proceed with the fixups or with deploy mode (the underscore parameters are still in your spfile after the upgrade).
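A sketch of that combination in the config file (prefix and file path are hypothetical):

global.remove_underscore_parameters=yes
upg1.add_after_upgrade_pfile=/home/oracle/scripts/keep_underscores.ora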

Everything goes Auto? First experiences with the Oracle Autoupgrade-Tool - Part 1 - introduction and configuration

At the DOAG conference 2019 I gave a presentation about the Oracle Autoupgrade tool (due to the filled-up room it was repeated the day after). The slides are in German and can be found in my Slide Download Center. As most slides are screenshots only, I will write a little bit about this new tool in this series of posts.

This first part covers an introduction and how to set up the configuration file.

Introduction:
After OOW 2018 I heard of the Autoupgrade tool for the first time and was very curious. I downloaded the slides, and what I found was something I thought should have existed for years:
a tool which helps you upgrade your databases from one release to another without running all those scripts, pre-installation jars and DBUA manually - with less hands-on work and a lot of things automated. The more I read, the more curious I got.

Later on, I saw autoupgrade as part of the Oracle 19c documentation, but I wasn't really able to see how it could work, and I also had no time to look into it in detail.

Then, in mid-June 2019, I saw Mike Dietrich's (Mister Upgrade Guru) blog post about how to create a sample configuration file and how few parameters are needed to start an upgrade.
So I decided (after a short conversation with Mike on Twitter, June 13th) to test the tool and to send a proposal for a presentation at the DOAG conference. I had already installed the 19c database software on my Windows 10 laptop, so I decided to do the first small test there.

Unfortunately, I got a Java exception when running autoupgrade.jar with the -version flag. The version downloaded from support.oracle.com had this problem too (Slide 11). It's so typical that it hits me...😳

So I was really sorry to inform Mike that I couldn't test it, due to (in my opinion) a problem with my German environment. I opened an SR to pass all the logs to Oracle Support and Mike's team.
On July 1st (! I had started testing in mid-June! - Slide 13) I got an email from Mike saying the bug should be fixed in the version they had uploaded some days before. The fix included a fallback to the default language (English) if the OS language is not supported. I downloaded that version and yes, it worked now (Slide 15).

I asked my colleague to set up a "Swiss customer similar" database environment on a Windows 2012 server, and I prepared a VirtualBox with Oracle Linux and some CDBs.
As I was busy over the summer, I was mostly only able to test things in my spare time. So I started with reading the documentation and Mike's blog.
The more I read, the more I saw targets my employer could reach by using this tool (Slide 7):

As an ISV and System Integrator, we have a lot of databases to upgrade (most of them from 12.1 to 19c) next year. Some on Linux, some on Windows, some on ODA (but ODA is out of scope here).
The database editions are Standard Edition 2 and Enterprise Edition and we have all kind of licenses (Embedded, Application Specific and Full Use).
Our main target is to get rid of any manual work, as this is the most error-prone part, e.g. changing spfile parameters, switching from non-CDB to the new CDB architecture, maybe combining application upgrades with database upgrades (scripted), etc.

After the VirtualBox and the Windows environments (on VMware) had been installed, I was ready to start with the tests.

Preparation:

As a lot of servers don't have (the right) Java (version) installed, it's best to use the Java from the ORACLE_HOME/jdk/bin directory of 18c or 19c (Slide 15). This works fine. Also, as the release cycles of the autoupgrade tool are very short, you should always download the newest autoupgrade.jar from support.oracle.com. Even if you checked the note a week ago, check again before you start your work.
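For example (the home path is hypothetical):

/u01/app/oracle/product/19.0.0/dbhome_1/jdk/bin/java -jar autoupgrade.jar -version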

With that in mind, you can run the tool with the -create_sample_file config parameter (by the way, on my German environment it says "unsupported language, defaulting to English") (Slide 16).

It's very easy to use the sample config file (Slide 17) to create your own configuration. One small thing you may find on Windows is that the global log directory is missing backslashes. This does not work out of the box but is easy to fix. On Linux, the default log directory is fine and points to /home/oracle (Slide 18).

The next thing to do is to specify (Slide 19), e.g., the unique database name as "DBNAME", the "SID", "SOURCE HOME" and "TARGET HOME", the "TARGET_VERSION" of the database and, if you wish, even more (see the documentation for all parameters). The "START_TIME" parameter can be set to 'now' or to a date/time - the format can be found in the documentation (no NLS_LANG setting). With "add_after_upgrade_pfile" (or delete instead of add, or before or during instead of after) you can automatically change the SPFILE content.

One of the things I really like is changing the tag "UPG1." to a tag that gives more information (e.g. EDM for our Energy Data Management DB, DAPHNE for the museum's DB, etc.) (Slide 19).

So you can create one config file for all kinds of databases, and they are nicely named. Running "UTLRP" for recompilation is a nice thing; also have a look at the parameter for "TIMEZONE_UPG"rades, as it is often forgotten in manual upgrades. You should also know the "RESTORATION" parameter; more information on it will follow later.
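A minimal sketch of one entry with a speaking tag (all names and paths are examples; check the sample file for the exact parameters):

global.autoupg_log_dir=/home/oracle/autoupgrade
edm.dbname=EDM1
edm.sid=EDM1
edm.source_home=/u01/app/oracle/product/12.1.0/dbhome_1
edm.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
edm.start_time=now
edm.run_utlrp=yes
edm.timezone_upg=yes
edm.restoration=yes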

Once you have set up the config file, you are ready to start working with your database(s).


The next step is the Pre-Upgrade analysis (next post).

Most Valued Contributor for Database and Engineered Systems / Company Blog

Well, some of you may have seen this new badge on the right side of the blog. I was named "Most Valued Contributor" in the area of database and engineered systems for 2019 by the My Oracle Support Community team. What does this mean?
Oracle says:
"The MVC award is a prestigious recognition awarded annually in the My Oracle Support Community to customer users who demonstrate an exceeding commitment to the ongoing success of their peers. These users go above and beyond to provide recognized correct and helpful responses to online requests for assistance, and they regularly, willingly, and selflessly share their hard-earned expertise and knowledge of Oracle products for the benefit and betterment of the wider community."


It is really an honour to be selected as one of nine people WORLDWIDE. Well, some of you may not know https://community.oracle.com as a separate website. You don't need to!

Most of the discussions can be found using, e.g., a normal Google search, as the first lines of the posts are available without any Oracle login. You can also start with https://support.oracle.com as an entry point and do a normal Knowledge Base search.
You then get two different result sets - one is the Knowledge Base results, which is the official part of Oracle with bug notes, etc.; the other one is the Community Search results.

That is the part where people like me invest spare time to help others on a peer-to-peer basis. Like with this blog, but based on direct questions/problems. You also (sometimes/often) get answers regarding releases which aren't under support anymore (like XE or older versions). So, if you are looking for some help, try to search there or post your own question.
If you want to create your own post with a question, you can do this either on the community web page directly, or you can go to the Technical Service Requests area of the dashboard in support.oracle.com. If you press the "Ask in community" button, you get routed automatically to community.oracle.com. As I said, most people answer there for free, with enthusiasm and in their spare time.

To get fast and helpful answers, keep the following in mind:
- First search for your topic; maybe it has already been answered. If you find old posts (more than a year or two) which are still open/unanswered, don't ask there in a comment. Create your own new post.
- If you write a new post, put in as much information as you can. Often questions are not answered because things like edition, version, ... are missing.
- Don't say "you have to" - we just do voluntary work for free, so be kind.
- Put your question in the right space and only once (you don't get more help by cross-posting things). Come back regularly to see if someone has answered or is asking for more input. If we ask questions to understand YOUR problem and don't get any answer from the original poster... well...
- And don't forget: mark your question as answered if one of the answers helped you/is correct, so the next person with the same question can see there is a solution and does not need to open another post. If you find a solution by yourself, write it into a comment and close your original post. This also helps others!
- Last, but not least: sometimes we need to say "sorry, that is something for Oracle Support", so you need to open an SR (which means you need a support contract), and sometimes things are so complicated that we have to say we can't dig further into a problem. Especially when it comes to reading traces, which could take days, we will not do that for free (but we will not ask you in the post for paid work; if you want this, you need to ask us directly).

That's it, guys. 👌

Now to something different. As you can see, I do a lot of things regarding Oracle and I like to help - and I am not alone. Therefore, if you speak German, please follow the blog of my employer Robotron (I will also write there, together with some colleagues). You will get a lot of information not only about Oracle, but also about other technologies. You can subscribe, e.g., to the Robotron Blog (RSS) or just visit it from time to time at https://www.robotron.de/unternehmen/aktuelles/blog/.

Oracle Database Autoupgrade Tool throws Java Error on MS Windows

Hi,

Building up knowledge often starts with trying new things. One of the things I am currently playing around with is the new autoupgrade tool provided by Oracle.
If you start with this tool, I would suggest you consult Mike's blog.
Mike Dietrich has some blog posts (and more are coming) regarding autoupgrade.jar, how to use it, etc.

As I often try things early, I also often run into problems which can easily be avoided if you know what you need to do.
These days, I was struggling with running the autoupgrade.jar on my MS Windows 10 notebook.
I have downloaded the latest 19c (19.3.0) database from Oracle, and I have also downloaded the latest autoupgrade.jar from the support note.

Running "java -version" showed me that I have the right Java 8 executable, but trying to run autoupgrade.jar with

java -jar c:\app\oracle\product\19.3.0\rdbms\admin\autoupgrade.jar -version 

returned a wonderful java runtime error stack:

Exception in thread "main" java.lang.ExceptionInInitializerError
at oracle.upgrade.autoupgrade.boot.AutoUpgMain.<init>(AutoUpgMain.java:114)
at oracle.upgrade.autoupgrade.boot.AutoUpgMain.newInstance(AutoUpgMain.java:142)
at oracle.upgrade.autoupgrade.boot.Boot.main(Boot.java:39)
Caused by: java.lang.NullPointerException
at java.io.Reader.<init>(Unknown Source)
at java.io.InputStreamReader.<init>(Unknown Source)
at oracle.upgrade.commons.lang.LangSettings.resourceBundleFromFile(LangSettings.java:213)
at oracle.upgrade.commons.lang.LangSettings.loadLanguageInfo(LangSettings.java:71)
at oracle.upgrade.commons.lang.LangSettings.<init>(LangSettings.java:64)
at oracle.upgrade.commons.lang.LangSettings.<init>(LangSettings.java:54)
at oracle.upgrade.commons.context.AppContext.<clinit>(AppContext.java:47)
... 3 more 


As I read from the Java error stack, there seemed to be a problem with some language settings... Well, as an Oracle dinosaur I have often seen things in Oracle which sometimes only worked with English/American/US settings. Can you guess the language of my Windows system?
It's German!

So I tried to set NLS_LANG to american, but unfortunately that didn't help. Afterwards, I opened a Service Request and also sent the details to Mike, because we were already communicating about it.

Some days later Mike contacted me again and told me there is a new autoupgrade.jar (published end of June) available on Metalink, with a bug fix that makes it fall back to English if a language isn't found. Maybe this could fix the behaviour of the tool on my machine too.

I then downloaded the autoupgrade.jar again from Metalink and tested it. Today I am finally able to start with the tests of the autoupgrade tool, because it doesn't throw an error anymore:

C:\app\oracle\product\19.3.0\rdbms\admin>java -jar autoupgrade.jar -create_sample_file config
Unsupported language [de], defaulting to English


So, if you run into a Java problem with autoupgrade.jar, don't waste time: download the newest version here and try again. As it is a relatively new tool, there may still be some more errors in it, but Oracle is fixing them and still releases new versions frequently. Don't base your tests on the autoupgrade.jar that is delivered with the Oracle database downloads, neither for Windows nor for Linux/Unix.


Slide Downloads

I have created a new sub-page on my blog where I have started to upload, or link to, the presentations you can download, e.g. from the DOAG events I have spoken at.
You can find them at the "Slide Download Center" on the right side of the page, or you can follow this link.

Next public events where you can meet me

Hi,

more events are coming - here is where you can find me speaking in the next months (unfortunately, all presentations will be held in German):
  • The first presentation is at the 15th Robotron Business Cafe in Dübendorf, where I will speak about operating applications in clouds.
  • In May (14th) you can find me at the Oracle Fokustag Datenbank at Robotron's HQ in Dresden, where I will present Oracle Database 18c/19c new features (including XE) and what a DBA needs to prepare to run 19c as a long-term release.
  • The next event is the big Swiss Oracle User Group Day, which takes place on May 22nd in Olten. I will speak there again about 19c new features. You can find me in Track 5 in the morning (10.30h) at the Fachhochschule Nordwestschweiz in Olten.
  • The last event in May (28th, at the Stade de Suisse) is some kind of after-work beer - not a meetup, but an event regarding PostgreSQL and the Oracle Database Appliance (ODA). Yes, you can also consolidate open-source databases together with, e.g., Standard Edition 2 or Enterprise Edition databases on the Oracle Database Appliance (most companies have free resources on the ODAs they own). We will tell you how you can run PostgreSQL on ODA, which nice features you can use and what you should consider when running open-source databases on ODA.
So there are enough events where you can meet me - no excuse if you miss ALL of them 😉

And by the way - HAPPY EASTER TO EVERYONE!

Creating KVM network bridge on ODA - Not able to connect to database anymore.

A lot of people are using Oracle's KVM solution on the ODA (Oracle Database Appliance). My company, e.g., runs things like application servers in a Linux VM on the ODA lite models for our own software solutions (like a communication server, etc.) if a customer runs ODAs (we call it "Solution-in-a-Box"). But there are also other customers, for whom we just act as system integrator, who want to use KVM on the ODA.

There is a really nice blog series on how to enable and use KVM on ODA; the starting point is this blog post by Tammy: kvm-on-oda.

It is straightforward, but one thing is a little bit tricky: the network configuration for KVM on the ODA. The best solution is to BRIDGE the network. Ruggero has written a blog post, as part of Tammy's series, on how to enable all the different network types.
Don't use NAT or MacVTap - just follow the configuration steps for "Bridged networking (aka "shared physical device")".

Be sure you have access to the ILOM's host redirection function, because if there is any misconfiguration of the bridge, you lose network connectivity and you are not able to connect internally (like you could with an ODA HA).

Even if you have followed the configuration steps and can connect to the ODA host again with the bridge configuration, you are no longer able to connect to the database(s) on that host. Why? Because one mandatory step is missing from Ruggero's example: the configuration of the clusterware (as grid user)! Some people tried to stop and start the listener, but the listener will not start and errors out.

What you need to do as a last configuration step is to modify your clusterware network configuration. The listener is bound to network 1, which can easily be seen by issuing

$ srvctl config listener
Name: LISTENER
Type: Database Listener
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

To check the configuration, run the following command:
$ srvctl config network
Network 1 exists
Subnet IPv4: 10.214.0.0/255.255.248.0/btbond1, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

As you can see, the network is still configured on btbond1 instead of pubbr0.
The syntax to change this (I use the same subnet in my example as Ruggero does in his blog) is:
$ srvctl modify network -netnum 1 -subnet 10.214.0.0/255.255.248.0/pubbr0 

Now it is best to either restart the ODA, to check that everything also runs fine after a reboot, or at least to restart the listener.
$ srvctl stop listener
$ srvctl start listener

Then you are done, folks: you can now use the bridged device for your KVMs and the databases are reachable again.





Some more presentations... part 2 (with some corrections)

Hi all,

I have some date changes to announce regarding the post about where I will next speak about Oracle stuff.
So the new (and I hope final) dates for the next months are:

  • New: 07.05.2019, Robotron BusinessCafe in Dübendorf. I will speak a little bit about the Oracle Cloud and (maybe) in addition a little bit about DBSAT (database security).
  • CHANGED: Then, in May (14th), I will speak at the Oracle Fokustag at Robotron's HQ in Dresden, also about 18c/19c new features (including XE) and what a DBA needs to prepare to run 19c as a long-term release.

    Agenda and registration (available soon): Fokustag Datenbanken in Dresden
  • And last, but not least, the big Swiss Oracle User Group Day takes place on May 22nd in Olten. I will speak there again about 19c new features.

Some more presentations...

Hi all,

so many new things, so much testing, so much work to do at the moment.
But despite the fact that I am testing the Oracle Autonomous TP Cloud and PostgreSQL (last week I finished the tests with pgBackRest as the Robotron standard backup tool for our PostgreSQL customers), I am planning some more presentations at events in Switzerland and Germany.

At the moment the following events are confirmed; unfortunately for the English-speaking people, they are all in German.

  • 26.03.2019, Oracle Fokustag at Robotron Schweiz. I will speak about 19c new features and, I think, a little bit about the new version of DBSAT (Database Security Assessment Tool).
  • 9./10.04.2019, Oracle Database Appliance events at the Stade de Suisse, Bern, and at Oracle's Smart Innovation Center in the Prime Tower Zurich. The main focus there is how the ODA can help you on premises to find your way to the cloud and why Standard Edition 2 fits the Oracle Database Appliance perfectly.
  • Then, in May (7th), I will speak at the Oracle Fokustag at Robotron's HQ in Dresden, also about 18c/19c new features and what a DBA needs to prepare to run 19c as a long-term release.
  • And last, but not least, the big Swiss Oracle User Group Day takes place on May 22nd in Olten. I will speak there again about 19c new features.

Hope to see you at one or more events!

Guest Addition Installation "hangs" on Oracle VirtualBox after Upgrade from 5.2.22 to 6.0.4

I like running Oracle's VirtualBox on my laptop for tests and demos. It is very easy to create new virtual machines, take snapshots, do tests, and roll back and forward between different stages.

During my holidays, Oracle released a new version of VirtualBox - version 6. As I was running 5.2.22 on my laptop, I downloaded the newest version (6.0.4) and installed it on my machine. The VMs started like they did before, but as I need the guest additions (for shared folders, etc.), I had to upgrade them as well.

When you start a VM, the VirtualBox software automatically checks for the existence of an older guest additions version and asks if you want to download/update to the newest version. You should do so - but in my case, after the download and the initial installation were done, the run failed.
I was logged into my VM as root, and the VBox Guest Additions ISO was mounted automatically. Then it asked me to start the installation:


As I trusted this, I just started the installation by pressing the "Run" button. But the installation started a new terminal session and then hung after some seconds at "Removing installed version of VirtualBox Guest Additions".


This shouldn't happen, as it is normally a very quick task. I tried to restart the VM, but it still stopped at the removal part of the Guest Additions.

I don't know why this happened, but I was able to solve it by opening a terminal window as root and starting "autorun.sh" manually:
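Roughly like this (the mount point of the Guest Additions ISO is an example and will differ per version):

cd /run/media/root/VBox_GAs_6.0.4
sh ./autorun.sh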




Doing this, the old Guest Additions were removed within seconds, and the new guest additions kernel modules were installed successfully afterwards (the build took a few minutes).


After restarting the VM, I was able to use the shared folders again.

So, if you have a problem with the Guest Additions after upgrading your VirtualBox software, try starting the Guest Additions installation from inside a root terminal window instead of using the normal autorun feature (even if you are logged in as root).

Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Relocate a PDB on-the-fly

Two parts of my three blog posts are already available, so this is the third and (at the moment) last post about single tenancy and what you can easily do with pluggable databases using database links.

What you need in order to use the scripts as they are can be found in my first blog entry.

Case 3 - Relocate a PDB on the fly:

There are different use cases for relocating a PDB on the fly; the most common are moving a PDB to other hardware or to the/another cloud infrastructure.
As long as the "copy" is not opened read-write, the old PDB stays where it is. So you can, e.g., create a remote_listener entry for the PDB in the new environment and let it point to the listener of the environment you are moving from. If you do so, the remote listener (at the original location) will act as a kind of proxy to the listener in the new environment. The new PDB can then be used, e.g. by an application server, without any changes to the tns/jdbc connect. Be aware that the service name must be the same on both sides for the relocated PDB, and that you need to point the application connect to the new location at some point later. If you forget this and start decommissioning the old environment, the application and users cannot work with the moved PDB.

How did you move a database with the "good old times" tools? Export and import? Well, that gives you a nice, reorganised database, but it isn't a fast approach if you look at the downtime. Creating a Data Guard environment or using replication tools is sometimes not possible due to licensing issues or costs (Standard Edition, GoldenGate, Quest SharePlex, ...). Maybe you can clone the database using RMAN and recover the new environment manually, but this is a lot of work, and one small mistake somewhere may leave the environment unrecoverable. So the easiest and last option is to shut down production and move everything (with or without compression tools) to the new environment. And the copy takes time, and all that time the database is down...

What would you say to the following solution, in short:
- Create a database link between the two databases
- Enter one statement to move the database in the background and let it be mounted (so you can keep working with the original production)
- Open the database at the new location (which means it is synchronized one last time with the original PDB, and after that the original PDB is deleted automatically)

Sounds easy? IT IS that easy!

What are the steps you need to do in detail?

  1. Create a common user and grant him the rights (SYSOPER!), and (let) open the PDB you want to move to another host/to the cloud.
  2. In the CDB$ROOT you want the PDB moved to, create a database link to the CDB$ROOT (!) you want to move from.
  3. Create the PDB with RELOCATE in the second CDB.
  4. Open the new PDB read-write (this step synchronizes the PDBs and drops the old one afterwards).
The script to run connects first to the source and prepares the user, then to the target CDB, prepares the database link and relocates the database; a minimal sketch of the statements follows below.
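A minimal sketch of the statements behind those steps (user, password, link and PDB names are hypothetical; the linked script is the complete version):

-- 1. at the source CDB$ROOT
CREATE USER c##relocate IDENTIFIED BY secret;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE, SYSOPER TO c##relocate CONTAINER=ALL;

-- 2. at the target CDB$ROOT
CREATE DATABASE LINK source_cdb CONNECT TO c##relocate IDENTIFIED BY secret USING 'source_cdb_tns';

-- 3. and 4.: relocate in the background, then open (last sync, source PDB is removed)
CREATE PLUGGABLE DATABASE pdb1 FROM pdb1@source_cdb RELOCATE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;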

Again, one last comment: I didn't test the scripts after I threw things out. If something does not work, please come back and leave a comment so I can fix it.
Thanks a lot!





Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Remote snapshot clone

Some people may be waiting for the second of the three blog posts I am writing about single tenancy and why (some / a lot of / most) people think it is unnecessary complexity or - even worse - unnecessary at all.

What you need in order to use the scripts as they are can be found in my first blog entry.

Case 2 - Remote snapshot clone of a PDB:

Snapshotting databases using a database link has a lot of use cases.
The following two are the ones I like most:
"Fast Provisioning", e.g. of nightly builds to testers or developers, and application upgrades.

For remote snapshot PDB clones you either use file systems with snapshot technology included (like ACFS or ZFS) or you use traditional file systems (then you need to set CLONE_DB=true in your spfile and restart the database).
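For the traditional file system case, a sketch of that setting (it only takes effect after the restart):

SQL> ALTER SYSTEM SET clone_db=TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP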

Well, how do you provision a nightly build today, in 5 minutes, to your testers and developers? On 40 machines? RMAN clone? Data Pump export and import? What about the disk space you need for that? Do you like to spend money on it? Then tell me which shares I should buy before you throw the money out of the window. 😉

Or how do you do an application upgrade? With a non-CDB database you can take an offline backup of the database, or an online backup while noting down the last SCN, configure flashback database, ... And how long does it take to return to the before-upgrade image if something goes wrong with the application upgrade?

Guys, years ago I was an app consultant and I had to roll back an installation of an application three times (at the same customer), because the business people found some issues they hadn't seen (tested) on the test system before. It took me hours and a lot of nerves to restore everything to the right point in time and to set up the standby environments again later on... And if you do this three times in 4 months... 😟

So, the trick you will use for all of these use cases in the future is the remote snapshot clone. Fast, reliable, and you don't need to touch the "production", because all changes are done in your "new" snapshotted PDB.

What are the steps you need to do?



  1. Create a common user and grant him the rights, and open the PDB you want to snapshot read-only (if you don't use a snapshot-capable file system).
  2. In the CDB$ROOT you want the PDB cloned to, create a database link to the PDB you want to clone.
  3. Create the snapshot clone remotely in the second CDB.
  4. Open the new PDB read-write.
  5. Do your application upgrade, development or testing:
    1. if it is successful / if you need to keep the PDB - "alter pluggable database <xyz> materialize;"
    2. if it is not successful, or before you create another nightly build - drop the new PDB and open the original source PDB read-write again
Script 1, to be run at the source CDB$ROOT container, prepares the source PDB to be snapshotted.
Script 2, to be run at the target CDB$ROOT container, creates the snapshot clone. A minimal sketch of both follows below.
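A minimal sketch of the core statements from both scripts (PDB and link names are hypothetical; the linked scripts are the complete versions):

-- script 1, at the source CDB$ROOT: open the source PDB read only (traditional file system case)
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;

-- script 2, at the target CDB$ROOT: snapshot clone over the database link, then open it
CREATE PLUGGABLE DATABASE pdb1_snap FROM pdb1@clone_link SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE pdb1_snap OPEN;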

Some notes on the last step (5.2): if you have dropped the snapshot clone PDB, the source PDB can't be opened read-write again right away. The error message you get does not tell you the truth: the PDB files are also set to READ at OS level. So what you need to do is change the permissions of the database files at OS level. The right command will look like

sudo chmod 640 /u01/app/oracle/oradata/XFEE1/77F0ADE89C820516E0534738A8C0802B/datafile/*

After you have made these changes, the source PDB can be opened read-write again. It's part of script 2, but as the folder names differ between databases, I have commented it out.

One last comment: I didn't test the scripts after I threw things out. If something does not work, please come back and leave a comment so I can fix it.
Thanks a lot!


By the way - happy New Year 2019 everyone!