Single Tenant is only unnecessary complexity? - Remote refreshable clone

My presentation (in German) at the DOAG conference was about Single Tenancy and why (some / a lot of / most) people think it is unnecessary complexity or - even worse - unnecessary at all.

Yes, when container-based databases were introduced in 12.1, my first thoughts were the same: I found it completely unnecessary. With 12.2 I changed my mind (a little bit, or even a little bit more), as there are really nice features included which can be used for free, even on Standard Edition 2.
I don't say you should or must use these features, but as a consultant and DBA I always want to help my customers do things better and free their work life from boring stuff.
And what is more boring than staying in the office or working from home at night or on a weekend, when you could have time to play with your kids, take a trip to a cool event or meet some friends?
So if you can't live without spending time on work outside of your office hours, don't read this article any further. 😏

For the DOAG I created a live demo scenario on my notebook (with a Linux VM), so I would like to share not only my knowledge but also part of my scripts, so that you can test everything on your own.

What you need in order to use the scripts as they are (so that you can simply copy and paste them) is not that much.

Preparation:
  • 2 container databases XFEE1 and XFEE2 (XF is my naming convention on test systems for XFS file systems, AC for ACFS and AS for ASM; EE means Enterprise Edition, SE means Standard Edition - the tests also work with Standard Edition databases and on ACFS/ZFS without any changes).
  • 1 Pluggable database EE1PDB1 inside of XFEE1
  • TNSNames.ora entries to reach XFEE1, XFEE2, EE1PDB1 and EE2PDB1 (EE2PDB1 is used later for the cloned/snapshot-cloned PDB1 inside XFEE2) - a sketch of these entries follows after this list.
  • You need to set CLONE_DB true (alter system set clone_db=true scope=spfile;) for non-ACFS/ZFS file systems. Restart the database after setting it.
  • To not violate any license rules, you can also set MAX_PDBS to one (alter system set max_pdbs=1 scope=both;).
  • Set all passwords to "oracle" (or change the scripts)
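A minimal sketch of those TNSNames.ora entries, assuming everything runs on one test VM with the default listener on port 1521 (host name and port are my assumptions - adjust them to your system):

XFEE1   = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XFEE1)))
XFEE2   = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XFEE2)))
EE1PDB1 = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=EE1PDB1)))
EE2PDB1 = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=EE2PDB1)))
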
Case 1 - Refreshable Remote Copy of a PDB:

Some people think about using the refreshable remote copy of a PDB as a poor man's standby database, but the best use case I can think of is setting up test environments (for applications or even for database updates) from your production. Another use case is to create a reporting PDB which is automatically refreshed once a day, because you can open the refreshable remote copy read-only (and as long as it is open read-only, no refresh happens).

Think about the following case: you have your production database up and running and you are asked to create a test environment. The business guys tell you that they need the test environment in 3 days and the data should be the state after business hours - or, even more horrible, on Saturday night.

What is your way to do this? Creating a standby database with Data Guard (not available for SE2) and stopping recovery on the evening in question, so you can prepare things in advance and don't need to work a lot that evening? Or do you prepare an empty database, wait for that evening, then run an export on the production and an import into the test environment? How long does that take, and how many hours do you spend that evening doing all this work?
A quick solution is to create a refreshable remote copy of a PDB. All you need to do is set it up in the days before "that evening" comes and let it refresh automatically every minute; all you need to do on "that evening" is stop the refresh and open the database. Maybe, for application reasons, you need to run a SQL script which is always needed (the same as with a standby or with export/import). If you have tested that once or twice, you could even create a cron job that does a last refresh and opens the database / runs your application script when it is time. So you don't need to work in the evening or on the weekend at all. 😊
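Just as an illustration (the schedule, the script name and the path are made up), such a cron job on the target host could run a script containing the REFRESH MODE NONE / OPEN statements from script 2 below plus your application script:

# crontab entry: finalize the clone at 22:00 on Saturday; remove the entry afterwards
0 22 * * 6 /home/oracle/scripts/finalize_ee2pdb1.sh >> /tmp/finalize_ee2pdb1.log 2>&1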

If you want to test this, you need the things I mentioned in the preparation part of this post. Then you can just run the following scripts.
The steps in the scripts are the following (you don't need to create the common user at the receiving database; I have just re-used the picture):


  1. Create a common user and grant it the required rights
  2. Create a database link from the CDB$ROOT you want the PDB cloned to, pointing to the PDB you want to clone from
  3. Create the refreshable PDB clone and let it run until you need it
  4. Open the automatically refreshed PDB read-write
Script 1 runs at the CDB$ROOT of the production.
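A minimal sketch of script 1, assuming the common user is called c##clone_user (the name is my choice) and the password is "oracle" as set in the preparation:

-- run as SYSDBA in the CDB$ROOT of XFEE1 (the production CDB)
CREATE USER c##clone_user IDENTIFIED BY oracle CONTAINER=ALL;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##clone_user CONTAINER=ALL;
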
Script 2 runs at the CDB$ROOT where you want the PDB cloned to.
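A sketch of script 2, under the same assumptions (the link name clone_link is my choice; depending on your file layout you may also need a FILE_NAME_CONVERT clause or a db_create_file_dest setting):

-- run as SYSDBA in the CDB$ROOT of XFEE2 (the target CDB)
CREATE DATABASE LINK clone_link
  CONNECT TO c##clone_user IDENTIFIED BY oracle USING 'EE1PDB1';

-- create the refreshable clone; it stays closed and refreshes every minute
CREATE PLUGGABLE DATABASE EE2PDB1 FROM EE1PDB1@clone_link
  REFRESH MODE EVERY 1 MINUTES;

-- on "that evening": stop the refresh and open the PDB read-write
ALTER PLUGGABLE DATABASE EE2PDB1 REFRESH MODE NONE;
ALTER PLUGGABLE DATABASE EE2PDB1 OPEN READ WRITE;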

Hope that helps you in your daily work. Stay tuned, there are more use cases where it is really nice to use PDBs instead of doing things "the old way". I will write a new post as soon as I have time.

As always, comments are welcome.

DOAG Conference 2018

Hi Folks,

last year I wrote a small review of the DOAG conference, and I want to do the same for this year's conference.

I was - again - a happy speaker at the DOAG conference 2018. My presentation was about "Single Tenant is only unnecessary complexity?" and I held it on the last conference day at noon. Despite the keynote right before my slot, and despite it being the time when everybody wanted to go to lunch, I think I had more than 100 people attending. In 45 minutes (I ran over by only a few seconds) I presented some features of Single Tenancy on Oracle databases - including a live demo to show how easy it is to do in reality. As a lot of people came to me after the presentation and asked how to do this or that (they had started to think about the requirements they have at home), I think it was a valuable presentation overall.

It was good to do this, even though a week before the presentation my VM crashed and was unrepairable, and I thought (for some minutes only) about skipping the live demo part. But - luckily - I had a one-week-old backup of the system, so I needed only two night shifts to get everything up and running again. You could have smelled my fear in the minutes before I started the demo - I really hoped that nothing would crash during the presentation...

Besides my own presentation, the 3 days were fully packed with know-how provided by well-prepared speakers. A trend I saw last year also continued this year: in my opinion, the number of live demos is dropping more and more. Unfortunately so, in my opinion, because I like presentations where you can really see "things running". So, if you plan to speak at a conference somewhere: yes, it is a shitload of work to prepare everything and test, and prepare more and test again - but it's worth it, guys (by the way, my live demo was one week of work - including the crash)!

What else, hm... the community: even if some people don't think so, Oracle is "still alive", and so is the community. More than 2'000 people found their way to Nuremberg, and more than 400 speakers, including really famous international speakers (and Oracle product managers), were there. And the best thing is that they are approachable - all of them can be found at the coffee lounge or somewhere else. So you can discuss, you can ask, you can get solutions for free (which is especially valuable when Oracle support does not seem to do its job very well, as we were told at a session about support satisfaction). Stefan Köhler, e.g., sat at the coffee lounge for two whole afternoons - ready to discuss performance with everyone.

Another thing I saw at the conference: the interest in learning more about alternative databases is high, so the open source database talks were fully booked. Especially when PostgreSQL was mentioned in the conference planner, the rooms were filled to the last seat. In my opinion, open source databases will get a bigger piece of the cake (which is why Robotron, and therefore I, will also offer up to 24/7 support for PostgreSQL); nevertheless, they are not soooo free at all if you do a TCO calculation. While the Oracle database is a big, feature-rich database monster from a single source, in the open source world you have one main project, but if you want additional features you need some additional software here, a plugin there, ...

In the end, you either throw some money at a consulting company to get help with the community software, or you throw some money at a couple of internal people who will find out what you need and how to get it from the communities - or you buy "open source software" from one of the vendors around the communities. The problem then is the same as with Oracle: you will have some kind of vendor lock-in, and nobody knows whether these companies will still exist in 5 or 10 years, or will be bought by someone so that the "product" is maybe not supported anymore. It is not so easy at the moment to make the right decisions (at least I think Oracle will still exist in 10 years). And the big companies like Amazon or Salesforce give only a small part of the development they do back to the community versions. As I said, there will be a rising number of open source database installations, but I think Microsoft's SQL Server and Oracle's database will remain the two biggest fish for the next couple of years.

In addition to the technical details, there is a lot to hear about running projects better, about AI or machine learning, about development or the GDPR, cloud, ...

Some last words about the organisation: it is astonishing to me how all of this works year after year - especially when you know that a big part of the work is done in the spare time of the DOAG members. Organizing everything for more than 2'000 people, including food, the keynotes, the party, all the community activities (like APEX, PL/SQL, ...): incredible. And the location - with its different rooms and restaurants on each floor - is really perfect.
My conclusion: if you didn't have time to join the conference (it would be good to speak German, as most presentations are held in German), try to come to Nuremberg next year (19-22 November 2019). Hopefully you will find me there again...

My first steps with Oracle Autonomous Transaction Processing Database (aka Autonomous Transaction Processing Service)

The autonomous transaction processing service, or autonomous transaction processing database (service), was released some weeks ago. I will write some more posts in the next weeks, once I have done more tests with this kind of service.

Today I want to shed a little light on some questions like:
 - how do you set up an autonomous transaction processing service in the Oracle cloud?
 - what is the autonomous transaction processing service?
 - what can and what can't you do with this service at first sight?

Maybe you are an Oracle DBA who has some knowledge of how to set up an environment in the Oracle Cloud. So you are used to creating your VLANs with the firewall rulesets, your nodes, your databases, your database services, your connections; you know the APIs you need to configure your cloud environment using cmd or Python or... Everything is configured by YOU.
When you start with the autonomous transaction processing database, you need none of this knowledge.

What you do need is a compartment and the right to create an autonomous transaction processing database. That's it, folks!
Then we can start creating the service - documented step by step.

Set up your ATP-Service


1.) Go to your Service Overview at your Oracle cloud portal for the ATP(D) service:


2.) Press CREATE and the following screen opens. You need to fill it out, but, as you can see, you don't need to specify any network things like you have to for, e.g., a "non-autonomous" database service.
You only specify the name, the number of CPU cores and the storage size, and you get an ADMIN user for which you need to set a complex password. The last step is to either subscribe with a license or bring your own license.



Then you return to the console where you can see that the database is getting provisioned.



While you may know from a "non-autonomous" database service that it takes quite a while to provision your environment, the autonomous transaction processing service is ready after a couple of minutes.

3.) Finished after 15 minutes!

But the question now is: how can you access this new service, given that you haven't specified any rules, VLANs, additional nodes ...?

So we need to

Connect to your ATP-Service

1.) The first step is to use the ADMIN user with your password to access the detail page of your freshly created service:

 After you have signed in, you will find yourself at the service console, which is loading...


2.) When you press the Administration link (marked red in the screenshot above), it will take you to the Administration page, where you can download the Client Credentials.



3.) After setting a password, the download of a zip file starts. You can save the zip file wherever you want. Some people now download the Instant Client to connect to the database, but as I have some full installations of databases and clients on my laptop, I have integrated the connection into my normal environment.

4.) I already have a special directory my TNS_ADMIN variable points to, because I don't want a couple of different locations where I store different sqlnet.oras or TNSNames.oras. To allow my environment to use the SQL*Net protocol to connect to the autonomous transaction processing database service, I copied the zip to my TNS_ADMIN directory and unzipped it locally, but you can do this with your own installation(s) the way you are used to:


Now the last steps regarding the local configuration:
5.) a) First I needed to add the sqlnet.ora properties from my wallet_ROBAUDB directory at the end of my local sqlnet.ora (so should you), so it looks like this (see the last two, red marked lines):
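In case you cannot read them in the screenshot: the sqlnet.ora shipped in the wallet zip contains roughly these two lines, where the wallet directory has to be adjusted to wherever you unzipped the file:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY = "<path to your unzipped wallet directory>")))
SSL_SERVER_DN_MATCH = yes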


b.) The second - and last - step is to add the tnsnames.ora entries from the wallet_ROBAUDB directory to your tnsnames.ora:
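The entries in the wallet's tnsnames.ora look roughly like the following (all concrete values - host, service name, certificate DN - come from your own wallet; the placeholders here are just illustrative):

robaudb_high = (description=
    (address=(protocol=tcps)(port=1522)(host=<host from your wallet>))
    (connect_data=(service_name=<service name from your wallet>))
    (security=(ssl_server_cert_dn="<certificate DN from your wallet>")))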


The tnsping succeeds and you are able to connect to the database:
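For example, with robaudb_high being one of the entries from the wallet (the names follow the <dbname>_high/_medium/_low pattern):

tnsping robaudb_high
sqlplus admin@robaudb_high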




Personally, I like to use additional tools, and I am now also able to connect to the service with any other SQL*Net related product, for example with my PL/SQL Developer:


 First Selects at the Service

The questions I had when starting with the ATP(D) service were, e.g.: what do I really get with this database service? Given how quickly it was provisioned, is it a complete DB? And yes, I have heard that all autonomous database services are supposed to run as RAC with an additional Data Guard setup, but is it a full database I can do whatever I want with (and with all the complexity), like a "normal" database service in Oracle's cloud?

To answer the questions simply: no, it isn't. What you get is a pluggable database, so you get one container from a CDB:
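If you can't see the screenshot, a query like this (run as ADMIN) shows it - the container name returned is a PDB name, not CDB$ROOT:

SELECT sys_context('USERENV', 'CON_NAME') AS con_name,
       sys_context('USERENV', 'CON_ID')   AS con_id
  FROM dual;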


You can have a look at the parameters that are set, e.g. the memory parameters:
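For example, querying them works fine:

SELECT name, display_value
  FROM v$parameter
 WHERE name IN ('sga_target', 'pga_aggregate_target');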


But you are not allowed to change them:
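For example, a statement like the following is rejected (typically with an "insufficient privileges" or similar error):

ALTER SYSTEM SET sga_target = 4G SCOPE = both;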


What you can do is, e.g., create a new user (there is a password policy on it) and afterwards start deploying your application from scratch (using SQL scripts, SQL Developer, ...), or you can load your application dump into it.
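A small sketch (the user name and password are made up; the password must satisfy the policy, and DATA is, to my knowledge, the predefined tablespace of the service):

CREATE USER demo_app IDENTIFIED BY "SomeComplex#Passw0rd42";
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW TO demo_app;
ALTER USER demo_app QUOTA UNLIMITED ON data;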


Enough stuff for my first post regarding the Oracle Autonomous Transaction Processing Database.

The next post, I think, will be about how to get your data into this service and a little bit about performance and/or security.

Stay tuned!

DBT-00007 User does not have the appropriate write privilege when starting dbca

At the moment I am preparing some VirtualBox machines to create the live demos for my presentation "Single Tenancy is only more complexity!?" at the DOAG Conference 2018.

For this, I set up a fresh Linux system and installed 18c Grid Infrastructure on it (using role separation, so GI is installed with the grid user and the databases should be installed with the oracle user).

After I created my 2 ORACLE_HOMEs with the Standard Edition 2 and the Enterprise Edition database (software only), I wanted to set up the first databases, but unfortunately dbca (the Database Configuration Assistant) struggled with a "[DBT-00007] User does not have the appropriate write privilege" error. Since I had NEVER seen that error before, I really wondered what was happening (and I don't think I missed any steps). And no, I don't want to check this with Oracle support, as suggested in the popup window details.

I had set up my ORACLE_BASE and ORACLE_HOME before starting dbca, so this wasn't the cause of the error. As the popup window didn't show anything in addition, I tried to check the dbca log at $ORACLE_BASE/cfgtoollogs/dbca - but there wasn't any.
So the problem seemed to be related to this directory.

The owner of $ORACLE_BASE/cfgtoollogs was grid (group oinstall) and the permissions were set to 755 - so only the grid user was allowed to write; group members weren't.

To change this, I first changed the owner of the cfgtoollogs directory:

# -R recurses; -h changes symlinks themselves instead of their targets
chown -hR oracle:oinstall /u01/app/oracle/cfgtoollogs

and afterwards I granted write permissions to the group on the subdirectories, so the grid user (as a member of oinstall) would still be able to add its stuff to the log directory:


# group oinstall gets write access; "others" keep read access on the top-level directory only
chmod 775 /u01/app/oracle/cfgtoollogs/
chmod 770 /u01/app/oracle/cfgtoollogs/dbca
chmod 770 /u01/app/oracle/cfgtoollogs/asmca
chmod 770 /u01/app/oracle/cfgtoollogs/netca


That's it, folks - dbca started after I made these changes.