Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Relocate a PDB on-the-fly

Two parts of my three-part blog series are already available, so this is the third and (at the moment) last post about Single Tenancy and what you can do easily with pluggable databases using database links.

What you need to use the scripts as they are can be found in my first blog entry.

Case 3 - Relocate a PDB on the fly:

There are different use cases for relocating a PDB on the fly; the most common are moving a PDB to other hardware or to (another) cloud infrastructure.
As long as the "copy" is not opened read-write, the old PDB stays where it is. So you can, e.g., create a remote_listener entry for the pdb at the new environment and let it point to the listener at the environment you want to move from. If you do so, the remote listener (at the original location) will work as some kind of proxy to the listener at the new environment. The new PDB then can be used, e.g. by an application server, without any changes at the tns/jdbc connect. Be aware, that the service name must be the same on both sides for the relocated PDB and that you need to change the application connect to the new location somewhere later. If you forget this and you start decommissioning the old environment, the application and users can not work with the moved pdb.

How did you move a database with the "good old" tools? Export and import? Well, that gives you a nice, reorganised database, but it isn't a fast approach if you look at the downtime. Creating a Data Guard environment or using a replication tool is sometimes not possible due to licensing issues or costs (Standard Edition, Golden Gate, Quest Shareplex, ...). Maybe you can clone the database using RMAN and recover the new environment manually, but that is a lot of work, and one small mistake somewhere can leave the environment unrecoverable. So the easiest and last resort is to shut down the production and move everything (with compression tools or without) to the new environment. The copy takes time, and all that time the database is down...

What would you say to the following solution in short:
- Create a database link between two databases
- Enter one statement to move the database in the background, leaving it mounted (so you can keep working with the original production).
- Open the database at the new location (which means it is synchronized one last time with the original PDB, and after that the original PDB is dropped automatically)

Sounds easy? IT IS that easy!

What are the steps you need to do in detail?

  1. Create a common user and grant it the necessary rights (SYSOPER!); open (or leave open) the PDB you want to move to another host / to the cloud
  2. At the CDB$ROOT you want the PDB moved to, create a database link to the CDB$ROOT (!) you want to move from.
  3. Create the PDB with RELOCATE at the second CDB.
  4. Open the new PDB read write (this step synchronizes the PDBs and drops the old one afterwards).
The script to run connects first to the source and prepares the user, then to the target CDB, prepares the database link and relocates the database.
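
A rough, untested sketch of these statements (user and database link names are placeholders I made up; the CDB/PDB names are from my preparation, and both CDBs should be in archivelog mode with local undo):

-- step 1: at the source CDB$ROOT (XFEE1)
create user c##relocate_user identified by oracle container=all;
grant create session, create pluggable database to c##relocate_user container=all;
grant sysoper to c##relocate_user container=all;

-- step 2: at the target CDB$ROOT (XFEE2) - database link to the source CDB$ROOT
create database link to_xfee1 connect to c##relocate_user identified by oracle using 'XFEE1';

-- step 3: relocate the PDB over the link; the original stays usable while it is copied
-- (keeping the same PDB name, so the default service name stays the same)
create pluggable database EE1PDB1 from EE1PDB1@to_xfee1 relocate;

-- step 4: open it read write - a last sync happens, then the source PDB is removed
alter pluggable database EE1PDB1 open read write;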

Again, one last comment: I didn't test the scripts after I threw things out of them. If something does not work, please come back and leave a comment, so I can fix it.
Thanks a lot!





Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Remote snapshot clone

Some people may be waiting for the second part of the three blog posts I am writing about Single Tenancy and why (some / a lot / most) people think it is unnecessary complexity or - even worse - unnecessary at all.

What you need to use the scripts as they are can be found in my first blog entry.

Case 2 - Remote snapshot clone of a PDB:

Snapshotting databases over a database link has a lot of use cases.
These are the two I like most:
"fast provisioning", e.g. of nightly builds for testers or developers, and application upgrades.

For remote snapshot PDB clones you either use a file system with built-in snapshot technology (like ACFS or ZFS) or a traditional file system (in which case you need to set CLONE_DB=true in your spfile and restart the database).
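
On a traditional file system, that parameter change is quickly done (note that it needs a restart of the CDB):

-- only needed on non-ACFS/ZFS file systems
alter system set clone_db=true scope=spfile;
shutdown immediate
startup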

Well, how do you provision a nightly build today in 5 minutes to your testers and developers? On 40 machines? RMAN clone? Datapump export and import? What about the disk space you need for that? Like to spend money on it? Then tell me which shares I should buy before you throw the money out of the window. 😉

Or how do you do an application upgrade? With a non-CDB database you can take an offline backup of the database, take an online backup and note down the last SCN, configure flashback database, ... And how long does it take to get back to the before-upgrade image if something goes wrong with the application upgrade?

Guys, years ago I was an app consultant and I had to roll back an application installation three times (at the same customer), because the business people found issues they hadn't seen (tested) on the test system before. It took me hours and a lot of nerves to restore everything to the right point in time and to set up the standby environments again afterwards... And doing this three times in 4 months... 😟

So, the trick you will use for all of these use cases in the future is the remote snapshot clone. Fast, reliable, and you don't need to touch the "production", because all changes happen in your "new" snapshot PDB.

What are the steps you need to do?



  1. Create a common user and grant it rights; open the PDB you want to take a snapshot from read only (if you don't use a snapshot-capable file system)
  2. At the CDB$ROOT you want the PDB cloned to, create a database link to the PDB you want to clone.
  3. Create the snapshot clone remotely at the second CDB.
  4. Open the new PDB read write
  5. Do your application upgrade, develop or test
    1. if it is successful / if you need to keep the PDB - "alter pluggable database <xyz> materialize;"
    2. if it is not successful, or before you create the next nightly build - drop the new PDB and open the original source PDB read write again
Script 1, to run at the source CDB$ROOT container, prepares the source PDB to be snapshotted.
Script 2, to run at the target CDB$ROOT container, creates the snapshot clone.
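
As a rough, untested sketch of what those two scripts boil down to (user and database link names are placeholders; the CDB/PDB names are from my preparation post):

-- script 1 sketch: at the source CDB$ROOT (XFEE1)
create user c##snap_user identified by oracle container=all;
grant create session, create pluggable database to c##snap_user container=all;
-- without a snapshot-capable file system the source must be opened read only
alter pluggable database EE1PDB1 close immediate;
alter pluggable database EE1PDB1 open read only;

-- script 2 sketch: at the target CDB$ROOT (XFEE2)
create database link to_ee1pdb1 connect to c##snap_user identified by oracle using 'EE1PDB1';
create pluggable database EE2PDB1 from EE1PDB1@to_ee1pdb1 snapshot copy;
alter pluggable database EE2PDB1 open read write;

-- step 5.1 (keep the clone for good): alter pluggable database EE2PDB1 materialize;
-- step 5.2 (throw it away, after closing it): drop pluggable database EE2PDB1 including datafiles;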

Some notes on the last step (dropping the snapshot clone and re-opening the source): once you have dropped the snapshot clone PDB, the source PDB can't be opened read write right away. The error message you get does not tell you the truth; the real reason is that the PDB's data files are also set to read-only at OS level. So what you need to do is change the permissions of the data files at OS level. The right command will look like

sudo chmod 640 /u01/app/oracle/oradata/XFEE1/77F0ADE89C820516E0534738A8C0802B/datafile/*

After you have made this change, the source PDB can be opened read write again. This is part of script 2, but as the folder names differ between databases, I have commented it out.
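
A minimal sketch of that re-open (with the names from my test setup, run at the source CDB$ROOT XFEE1):

alter pluggable database EE1PDB1 close immediate;
alter pluggable database EE1PDB1 open read write;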

One last comment: I didn't test the scripts after I threw things out of them. If something does not work, please come back and leave a comment, so I can fix it.
Thanks a lot!


By the way - happy New Year 2019 everyone!

Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Remote refreshable clone

My presentation (in German) at the DOAG conference was about Single Tenancy and why (some / a lot / most) people think it is unnecessary complexity or - even worse - unnecessary at all.

Yes, when container-based databases were introduced in 12.1, my first thoughts were the same: I found it completely unnecessary. With 12.2 I changed my mind (a little bit, or even a little bit more), as there are really nice features included which can be used for free, even on Standard Edition 2.
I don't say you should or must use these features, but as a consultant and DBA I always want to help my customers do things better, or to free their work life from boring stuff.
And what is more boring than staying in the office or working from home at night or at a weekend, when you could have time to play with your kids, make a trip to a cool event or meet some friends?
So if you can't live without spending time outside of your office hours on work, don't read this article any further. 😏

For the DOAG I have created a live demo scenario on my notebook (with a Linux VM), so I would like to share not only my knowledge, but also part of my scripts, so that you can test everything on your own.

What you need to use the scripts as they are (so you can copy and paste them) is not that much.

Preparation:
  • 2 container databases XFEE1 and XFEE2 (XF is my naming convention on test systems for XFS file systems, AC for ACFS and AS for ASM; EE means Enterprise Edition, SE means Standard Edition - the tests also work with Standard Edition databases and on ACFS/ZFS without any changes).
  • 1 Pluggable database EE1PDB1 inside of XFEE1
  • TNSNames.ora entries to reach XFEE1, XFEE2, EE1PDB1 and EE2PDB1 (EE2PDB1 is later on used for cloned/snapshot cloned PDB1 inside of XFEE2)
  • You need to set CLONE_DB true (alter system set clone_db=true scope=spfile;) for non-ACFS/ZFS file systems. Restart the database after setting it.
  • To not violate any license rules you can also set MAX_PDBS to one (alter system set max_pdbs=1 scope=both;)
  • Set all passwords to "oracle" (or change the scripts)
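
For the pluggable database EE1PDB1 from the list above, a minimal sketch could look like this (the admin user name is just a placeholder, and I assume OMF / db_create_file_dest is set, otherwise add a file_name_convert clause):

-- at the CDB$ROOT of XFEE1
create pluggable database EE1PDB1 admin user pdbadmin identified by oracle;
alter pluggable database EE1PDB1 open read write;
alter pluggable database EE1PDB1 save state;
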
Case 1 - Refreshable Remote Copy of a PDB:

Some people think about using the refreshable remote copy of a PDB as a poor man's standby database, but the best use case I can think of is setting up test environments (for applications or even for database updates) from your production. Another use case is to create a reporting PDB which is automatically refreshed once a day, because you can open the refreshable remote copy read only (and as long as it is opened read only, no refresh happens).

Think about the following case: your production database is up and running and you have to create a test environment. The business guys tell you that they need the test environment in 3 days and the data should be the state after business hours or, even more horrible, of Saturday night.

What is your way of doing this? Creating a standby database with Data Guard (not available for SE2) and stopping the recovery on the evening in question? So you can prepare things in advance and don't need to work much that evening? Or do you prepare an empty database, wait for that evening, then run an export on the production and an import into the test environment? How long does that take, and how many hours do you spend that evening doing all this work?
A quick, simple solution is to create a refreshable remote copy of a PDB. All you need to do is set it up in the days before "that evening" comes and let it refresh automatically every minute; all you have to do on "that evening" is stop the refresh and open the database. Maybe, for application reasons, you need to run a SQL script which is always needed (also with standby or with export/import). If you have tested that once or twice, you could even create a cronjob that does a last refresh and opens the database / runs your application script when it is time to. So you don't need to work that evening or weekend at all. 😊
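
The "that evening" (or cronjob) part is then essentially just this - a sketch with the names from my test setup, run at the target CDB$ROOT (if you hit ORA-65118, see the note further down and switch into the PDB first):

alter pluggable database EE2PDB1 close immediate;   -- in case it was opened read only
alter pluggable database EE2PDB1 refresh;           -- one last refresh
alter pluggable database EE2PDB1 refresh mode none; -- stop the automatic refresh
alter pluggable database EE2PDB1 open read write;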

If you want to test this, you need the things I mentioned in the preparation part of this post. Then you can just run the following scripts.
The steps in the scripts are the following (you don't need to create the common user at the receiving database, I have just re-used the picture):


  1. Create a common user and grant it rights
  2. At the CDB$ROOT you want the PDB cloned to, create a database link to the PDB you want to clone.
  3. Create the refreshable PDB clone and let it run until you need it
  4. Open the automatically refreshed PDB read-write
Script 1 to run at the CDB$ROOT of the production.
Script 2 to run at the CDB$ROOT where you want the PDB cloned to.
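
A rough, untested sketch of what the two scripts do (user and database link names are placeholders; the CDB/PDB names are from the preparation above):

-- script 1 sketch: at the CDB$ROOT of the production (XFEE1)
create user c##refresh_user identified by oracle container=all;
grant create session, create pluggable database to c##refresh_user container=all;

-- script 2 sketch: at the CDB$ROOT where the clone should live (XFEE2)
create database link to_ee1pdb1 connect to c##refresh_user identified by oracle using 'EE1PDB1';
create pluggable database EE2PDB1 from EE1PDB1@to_ee1pdb1 refresh mode every 1 minutes;
-- the clone stays mounted and refreshes itself until you stop the refresh and open it
-- (see the sketch in the use case description above)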

Maybe you try to refresh or to set the refresh mode to none (see script 2) and you get an

ORA-65118: operation affecting a pluggable database cannot be performed from another pluggable database
 
If so, just set the container to the PDB ( alter session set container=EE2PDB1; ) and try again.

I hope this will help you in your daily work. Stay tuned, there are more use cases where it is really nice to use PDBs instead of doing it "the old way". I will write a new post as soon as I have time.

As always, comments are welcome.

You can find the second post regarding remote snapshot clone here

DOAG Conference 2018

Hi Folks,

last year I wrote a small review of the DOAG conference and I want to do the same for this year's conference.

I was - again - a happy speaker at the DOAG conference 2018. My presentation was about "Single Tenant is only unnecessary complexity?" and I held it on the last conference day at noon. Despite the keynote before my presentation and despite the time slot, when everybody wanted to go to lunch, I think I had more than 100 people attending. In 45 minutes (I overran by only a few seconds) I presented some features of Single Tenancy on Oracle databases - including a live demo to show how easy it is to do in reality. As a lot of people asked me after the presentation how to do this or that (they had started to think about the requirements they have at home), I think it was a valuable presentation after all.

It was good to do this, even though a week before I held the presentation my VM crashed and was unrepairable, and I was thinking (for some minutes only) about skipping the live demo part. But - luckily - I had a one-week-old backup of the system, so I needed only two night shifts to get everything up and running again. You could have smelled my fear minutes before I started the demo - I really hoped that nothing would crash during the presentation...

Besides my own presentation, the 3 days were fully packed with know-how provided by well-prepared speakers. A trend I saw last year also continued this year: in my opinion the number of live demos is dropping more and more. Unfortunately so, in my opinion, because I like presentations where you can really see "things running". So, if you plan to speak at a conference somewhere: yes, it is a shitload of work preparing everything and testing and preparing more and testing again - but it's worth it, guys (by the way, my live demo was one week of work - including the crash)!

What else, hm... the community: even if some people don't believe it, Oracle is "still alive" and so is the community. More than 2'000 people found their way to Nuremberg, and more than 400 speakers, including really famous international speakers (and Oracle product managers), were there. And the best thing is that they are approachable - all of them can be found at the coffee lounge or somewhere else. So you can discuss, you can ask, you can get solutions for free (which is especially valuable when Oracle support does not seem to do its job very well, as we were told at a session about support satisfaction). Stefan Köhler, e.g., was just sitting two afternoons at the coffee lounge - ready to discuss performance with everyone.

Another thing I saw at the conference - the interest in knowing more about alternative databases is high, so the open source database talks were fully booked. Especially when PostgreSQL was mentioned in the conference planner, the rooms were filled to the last seat. In my opinion, open source databases will get a bigger piece of the cake (so robotron, and therefore I as well, will offer up to 24/7 support for PostgreSQL); nevertheless, they are not soooo free at all if you do a TCO calculation. While the Oracle database is a big, feature-rich database monster from a single vendor, in open source you have one main project, but if you want to use additional features you need some additional software here, a plugin there, ...

In the end, you either throw some money at a consulting company to get help with the community software, or you throw some money at a couple of internal guys who will find out what you need and how to get it from the communities - or you buy an "open source software" package from one of the vendors around the communities. The problem then is the same as with Oracle: you will have some kind of vendor lock-in, and nobody knows whether these companies will still exist in 5 or 10 years, or will be bought by someone and the "product" is maybe not supported anymore. It is not so easy at the moment to make the right decisions (at least I think Oracle will still exist in 10 years). And the big companies like Amazon or Salesforce only give a small part of the development they do back to the community versions. As I said, there will be a rising number of open source database installations, but I think Microsoft's SQL Server and Oracle's database will still be the two biggest fish in the next couple of years.

In addition to the technical details, there are a lot of things you can hear about making projects better, about AI or machine learning, about development or DSGVO (GDPR), cloud, ...

Some last words on the organisation: it is astonishing to me how all this works year after year - especially if you know that a big part of the work is done in the spare time of the DOAG members. Organising things for more than 2'000 people, including food, the keynotes, the party, all the community activities (like APEX, PL/SQL, ...) - incredible. And the location - with its different rooms and restaurants on each floor it is really perfect.
My conclusion: if you didn't have time to join the conference (it helps to speak German, as most presentations are held in German) - try to come to Nuremberg next year (19.-22. November 2019). Hopefully you will find me there again too...

For the guys looking for the scripts of my presentation: You can find them here