Slide Downloads

I have created a new sub-page on my blog where I have started to upload or link to my presentations, e.g. from the DOAG events I have spoken at, so you can download them.
You can find it in the "Slide Download Center" on the right side of the page, or you can follow this link.

Next public events where you can meet me

Hi,

more events to come - here you can find me speaking in the next months (unfortunately, all presentations will be held in German):
  • The first presentation is at the 15th Robotron Business Cafe in Dübendorf, where I will speak about operating applications in the cloud.
  • In May (14th) you can find me at the Oracle Fokustag Datenbank at Robotron's HQ in Dresden, where I will present Oracle Database 18c/19c New Features (including XE) and what a DBA needs to prepare in order to run 19c as a long-term release.
  • The next event is the big Swiss Oracle User Group Day, which takes place on May 22nd in Olten. I will speak there again about 19c New Features. You can find me in Track 5 in the morning (10:30) at the Fachhochschule Nordwestschweiz in Olten.
  • The last event in May (28th at the Stade de Suisse) is some kind of after-work beer. Not a meetup, but an event about PostgreSQL and the Oracle Database Appliance (ODA). Yes, you can also consolidate open source databases together with e.g. Standard Edition 2 or Enterprise Edition databases on the Oracle Database Appliance (most companies have free resources on the ODAs they own). We will tell you how you can run PostgreSQL on ODA, which nice features you can use, and what you should consider when running open source databases on ODA.
So there are enough events where you can meet me - no excuse if you miss ALL of them 😉

And by the way - HAPPY EASTER TO EVERYONE!

Creating KVM network bridge on ODA - Not able to connect to database anymore.

A lot of people are using Oracle's KVM solution on the ODA (Oracle Database Appliance). My company, for example, runs application servers in a Linux VM on the ODA lite models for our own software solutions (communication server, etc.) when a customer runs ODAs (we call it "Solution-in-a-Box"). But there are also other customers, where we just act as system integrator, who want to use KVM on the ODA.

There is a really nice blog series on how to enable and use KVM on ODA; the starting point is this blog post by Tammy: kvm-on-oda.

It is straightforward, but one of the steps is a little bit tricky: the network configuration for KVM on ODA. The best solution is to BRIDGE the network. Ruggero has written a blog post, as part of Tammy's series, on how to enable all the different network types.
Don't use NAT or MacVTap - just follow the configuration steps for "Bridged networking (aka "shared physical device")".
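For orientation, a bridged setup of this kind typically boils down to two ifcfg files on Oracle Linux. The device names and IP settings below are assumptions for illustration only - follow Ruggero's post for the authoritative steps:

```shell
# Sketch only: typical ifcfg files for a bridged setup on Oracle Linux.
# Device names and IP settings are illustrative assumptions.

# /etc/sysconfig/network-scripts/ifcfg-pubbr0 (the bridge now carries the IP):
#   DEVICE=pubbr0
#   TYPE=Bridge
#   BOOTPROTO=static
#   IPADDR=10.214.0.10
#   NETMASK=255.255.248.0
#   ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-btbond1 (the bond becomes a bridge port):
#   DEVICE=btbond1
#   BRIDGE=pubbr0
#   ONBOOT=yes

# Restart the network to activate the bridge
service network restart
```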

Be sure you have access to the ILOM's host redirection function, because if there is any misconfiguration of the bridge, you lose network connectivity and you are not able to connect internally (like you could with an ODA HA).

Even if you have followed the configuration steps and can connect to the ODA host again with the bridge configuration, you will not be able to connect to the database(s) on that host anymore. Why? Because one mandatory step is missing from Ruggero's example: the configuration of the clusterware (as the grid user)! Some people have tried to stop and start the listener, but the listener will not start and errors out.

What you need to do as a last configuration step is to modify your clusterware network configuration. The listener is bound to Network 1, which can easily be seen by issuing:

$ srvctl config listener
Name: LISTENER
Type: Database Listener
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

To check the configuration, run the following command:
$ srvctl config network
Network 1 exists
Subnet IPv4: 10.214.0.0/255.255.248.0/btbond1, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

As you can see, the network is still configured on btbond1 instead of pubbr0.
The syntax to change this (I use the same subnet in my example as Ruggero in his blog) is:
$ srvctl modify network -netnum 1 -subnet 10.214.0.0/255.255.248.0/pubbr0 

Now it is best to either restart the ODA, to check that everything still runs fine after a reboot, or at least to restart the listener:
$ srvctl stop listener
$ srvctl start listener

That's it, folks - you can now use the bridged device for your KVM, and the databases are reachable again.





Some more presentations... part 2 (with some corrections)

Hi all,

I have some date changes to announce regarding the post about where I will speak next about Oracle topics.
So the new (and I hope fixed) dates in the next months are:

  • New: 07.05.2019, Robotron BusinessCafe in Duebendorf. I will speak a little bit about the Oracle Cloud and (maybe) additionally a little bit about DBSAT (Database Security).
  • CHANGED: Then, in May (14th), I will speak at the Oracle Fokustag at Robotron's HQ in Dresden, also about 18c/19c New Features (including XE) and what a DBA needs to prepare in order to run 19c as a long-term release.

    Agenda and registration (available soon): Fokustag Datenbanken in Dresden
  • And last, but not least, the big Swiss Oracle User Group Day takes place on May 22nd in Olten. I will speak there again about 19c New Features.

Some more presentations...

Hi all,

so many new things, so much testing, so much work to do at the moment.
But even though I am busy testing the Oracle Autonomous TP Cloud and PostgreSQL (last week I finished the tests with pgBackRest as the Robotron standard backup tool for our PostgreSQL customers), I am planning some more presentations at events in Switzerland and Germany.

At the moment the following events are confirmed; unfortunately for the English-speaking audience, they are all in German.

  • 26.03.2019, Oracle Fokustag at Robotron Schweiz. I will speak about 19c New Features and, I think, a little bit about the new version of DBSAT (Database Security Assessment Tool).
  • 9./10.04.2019, Oracle Database Appliance events at the Stade de Suisse, Bern, and at Oracle's Smart Innovation Center in the Prime Tower, Zurich. The main focus there is how the ODA can help you on premises to find your way to the cloud, and why Standard Edition 2 fits perfectly with Oracle's Database Appliance.
  • Then, in May (7th), I will speak at the Oracle Fokustag at Robotron's HQ in Dresden, also about 18c/19c New Features and what a DBA needs to prepare in order to run 19c as a long-term release.
  • And last, but not least, the big Swiss Oracle User Group Day takes place on May 22nd in Olten. I will speak there again about 19c New Features.

Hope to see you at one or more events!

Guest Additions Installation "hangs" on Oracle VirtualBox after Upgrade from 5.2.22 to 6.0.4

I like running Oracle's VirtualBox on my laptop for tests and demos. It is very easy to create new virtual machines, make snapshots, run tests, and roll back and forward to different stages.

During my holidays, Oracle released a new version of VirtualBox - version 6. As I was running 5.2.22 on my laptop, I downloaded the newest version (6.0.4) and installed it on my machine. The VMs started like they did before, but as I need the Guest Additions (for shared folders, etc.), I had to upgrade them as well.

When you start a VM, the VirtualBox software automatically checks for the existence of an older Guest Additions version and asks you if you want to download/update to the newest version. You should do so - but in my case, after the download and the initial installation were done, the run failed.
I was logged into my VM as root and the VBox Guest Additions ISO was mounted automatically. Then it asked me to start the installation:


As I trusted this, I just started the installation by pressing the "Run" button. But the installation opened a new terminal session and then hung after a few seconds at "Removing installed version of VirtualBox Guest Additions".


This shouldn't happen, as this is normally a very quick task. I tried to restart the VM, but it still got stuck at the removal of the Guest Additions.

I don't know why this happened, but I was able to solve it by opening a terminal window as root and starting "autorun.sh" manually:
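In my case the manual start looked roughly like the following; the mount point is an assumption (VirtualBox usually mounts the ISO somewhere under /run/media), so check where the Guest Additions ISO ended up on your system:

```shell
# Find where the Guest Additions ISO was mounted
mount | grep -i vbox

# Change into the mounted ISO (path is an example; adjust to your system)
cd /run/media/root/VBox_GAs_6.0.4

# Run the installer manually from the root terminal
./autorun.sh
```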




Doing this, the old Guest Additions were removed within a few seconds and the new Guest Additions kernel modules were then installed successfully (the build took a few minutes).


After restarting the VM, I was able to use the shared folders again.

So, if you have a problem with the Guest Additions after upgrading your VirtualBox software, try starting the Guest Additions installation from inside a root terminal window instead of using the normal autorun feature (even if you are logged in as root).

Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Relocate a PDB on-the-fly

Two parts of my three-part blog series are already available, so this is the third and (for the moment) last post about Single Tenancy and what you can easily do with pluggable databases using database links.

What you need in order to use the scripts as they are can be found in my first blog entry.

Case 3 - Relocate a PDB on the fly:

There are different use cases for relocating a PDB on the fly; the most common are moving a PDB to other hardware or to the/another cloud infrastructure.
As long as the "copy" is not opened read-write, the old PDB stays where it is. So you can, e.g., create a remote_listener entry for the PDB in the new environment and let it point to the listener in the environment you want to move away from. If you do so, the remote listener (at the original location) will work as a kind of proxy to the listener in the new environment. The new PDB can then be used, e.g. by an application server, without any changes to the TNS/JDBC connect. Be aware that the service name of the relocated PDB must be the same on both sides, and that you need to change the application connect to the new location at some later point. If you forget this and start decommissioning the old environment, the application and users cannot work with the moved PDB.
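A minimal sketch of such a cross-registration, assuming a made-up host name and the default listener port (the exact remote_listener setup depends on your environment):

```shell
# On the target CDB: register the relocated PDB's services with the listener
# at the ORIGINAL location, so it can proxy clients to the new environment.
# Host name and port are illustrative assumptions.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET remote_listener='old-db-host:1521' SCOPE=BOTH;
ALTER SYSTEM REGISTER;
EOF
```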

How did you move a database with the "good old" tools? Export and import? Well, that way you get a nice, reorganised database, but it isn't a fast approach if you look at the downtime. Creating a Data Guard environment or using a replication tool is sometimes not possible due to licensing issues or costs (Standard Edition, GoldenGate, Quest SharePlex, ...). Maybe you can clone the database using RMAN and recover the new environment manually, but this is a lot of work, and if you make a small mistake somewhere, the environment may not be recoverable. So the easiest last resort is to shut down production and move everything (compressed or not) to the new environment. And the copy takes time - and all that time the database is down...

What would you say to the following solution, in short:
- Create a database link between the two databases.
- Enter one statement to move the database in the background and leave it mounted (so you can keep working with the original production).
- Open the database at the new location (which means it is synchronized one last time with the original PDB, and after that the original PDB is deleted automatically).

Sounds easy? IT IS that easy!

What are the steps you need to do in detail?

  1. Create a common user and grant it the needed rights (SYSOPER!), and (have someone) open the PDB you want to move to another host/to the cloud.
  2. In the CDB$ROOT you want the PDB moved to, create a database link pointing to the CDB$ROOT (!) you want to move it from.
  3. Create the PDB with RELOCATE in the second CDB.
  4. Open the new PDB read write (this step synchronizes the PDBs and drops the old one afterwards).
The script to run connects first to the source and prepares the user, then to the target CDB, where it prepares the database link and relocates the database.
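The four steps could be sketched roughly as follows; user, password, TNS alias, and PDB names are made-up placeholders, and this is an untested outline rather than the actual scripts:

```shell
# Step 1: on the SOURCE CDB$ROOT, create a common user with the needed rights
# (names and password are illustrative assumptions)
sqlplus -s / as sysdba <<'EOF'
CREATE USER c##relocate_user IDENTIFIED BY secret CONTAINER=ALL;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##relocate_user CONTAINER=ALL;
GRANT SYSOPER TO c##relocate_user CONTAINER=ALL;
EOF

# Steps 2-4: on the TARGET CDB$ROOT, create the link, relocate, and open
sqlplus -s / as sysdba <<'EOF'
CREATE DATABASE LINK source_cdb_link
  CONNECT TO c##relocate_user IDENTIFIED BY secret
  USING 'source_cdb_tns';
CREATE PLUGGABLE DATABASE mypdb FROM mypdb@source_cdb_link RELOCATE;
ALTER PLUGGABLE DATABASE mypdb OPEN;
EOF
```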

Again, one last comment: I didn't test the scripts after I stripped things out of them. If something does not work, please come back and leave a comment, so I can fix it.
Thanks a lot!





Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Remote snapshot clone

Some people may be waiting for the second of the three blog posts I am writing about Single Tenancy and why (some / a lot of / most) people think it is unnecessary complexity or - even worse - unnecessary at all.

What you need in order to use the scripts as they are can be found in my first blog entry.

Case 2 - Remote snapshot clone of a PDB:

Snapshotting databases using a database link has a lot of use cases.
The following two are the ones I like most:
"fast provisioning", e.g. of nightly builds for testers or developers, and application upgrades.

For remote snapshot PDB clones you either use a file system with built-in snapshot technology (like ACFS or ZFS) or a traditional file system (in which case you need to set CLONE_DB=true in your spfile and restart the database).
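On a traditional file system, enabling CLONE_DB could look like this sketch (the instance restart is needed because the parameter is not dynamic):

```shell
# Set CLONE_DB in the spfile and bounce the instance to activate it
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET clone_db=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF
```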

Well, how do you provision a nightly build today in 5 minutes to your testers and developers? On 40 machines? RMAN clone? Data Pump export and import? What about the disk space you need for that? Do you like spending money on it? Then tell me which shares I should buy before you throw the money out of the window. 😉

Or how do you do an application upgrade? With a non-CDB database you can take an offline backup of the database, or take an online backup but note down the last SCN, or configure Flashback Database, ... And how long does it take to return to the before-upgrade image if something goes wrong with the application upgrade?

Guys, years ago I was an application consultant and I had to roll back an installation of an application three times (at the same customer), because the business people found issues they hadn't seen (tested) on the test system before. It took me hours and a lot of nerves to restore everything to the right point in time and to set up the standby environments again afterwards... And if you do this three times in 4 months... 😟

So, the trick you will use for all of these use cases in the future is the remote snapshot clone. Fast, reliable, and you don't need to touch "production", because all changes happen in your "new" snapshotted PDB.

What are the steps you need to do?



  1. Create a common user and grant it rights; open the PDB you want to snapshot read only (if you don't use a snapshot-capable file system).
  2. In the CDB$ROOT you want the PDB cloned to, create a database link pointing to the PDB you want to clone.
  3. Create the snapshot clone remotely in the second CDB.
  4. Open the new PDB read write.
  5. Do your application upgrade, development, or testing.
    1. If it is successful / if you need to keep the PDB: "alter pluggable database <xyz> materialize;"
    2. If it is not successful, or before you create another nightly build: drop the new PDB and open the original source PDB read write again.
Script 1, to run in the source CDB$ROOT container, prepares the source PDB to be snapshotted.
Script 2, to run in the target CDB$ROOT container, creates the snapshot clone.
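As an untested sketch of steps 1-4 (PDB, user, and TNS names are placeholders; the actual scripts are the ones linked above):

```shell
# Step 1: on the SOURCE CDB, create a common user and open the PDB read only
# (only needed read only if your file system can't snapshot, e.g. no ACFS/ZFS)
sqlplus -s / as sysdba <<'EOF'
CREATE USER c##clone_user IDENTIFIED BY secret CONTAINER=ALL;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##clone_user CONTAINER=ALL;
ALTER PLUGGABLE DATABASE srcpdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE srcpdb OPEN READ ONLY;
EOF

# Steps 2-4: on the TARGET CDB, link to the source and create the snapshot copy
sqlplus -s / as sysdba <<'EOF'
CREATE DATABASE LINK srcpdb_link
  CONNECT TO c##clone_user IDENTIFIED BY secret USING 'srcpdb_tns';
CREATE PLUGGABLE DATABASE snappdb FROM srcpdb@srcpdb_link SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE snappdb OPEN;
EOF
```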

Some notes on the last step (dropping the snapshot clone): once you have dropped the snapshot clone PDB, the source PDB can't be opened read write. The error message you get is not telling you the truth, because the PDB's files are also set to READ(-only) at the OS level. So what you need to do is change the permissions of the database files at the OS level. The right command will look like:

sudo chmod 640 /u01/app/oracle/oradata/XFEE1/77F0ADE89C820516E0534738A8C0802B/datafile/*

After you have made this change, the source PDB can be opened read write again. It's part of script 2, but as the folder names differ between databases, I have commented that line out.

One last comment: I didn't test the scripts after I stripped things out of them. If something does not work, please come back and leave a comment, so I can fix it.
Thanks a lot!


By the way - happy New Year 2019 everyone!