This is the third post of my series on the Oracle Remote Listener and Standard Edition 2 databases. The first post covered the setup of the Remote Listener, and the second showed how helpful a Remote Listener can be for ensuring High Availability with SE2.
One of the questions I often get is: what happens if one clones a pluggable database (PDB) from one root container (CDB$ROOT) to another, e.g. from a production system to a test database? It could be awful if the cloned PDB registered itself at the Remote Listener and the application started working against both the production PDB and the test PDB, depending on the connection.
Let's have a deeper look into that. If you haven't read the previous post(s), please do so, because I will refer to them.
The easiest way to prevent a PDB from registering with the Remote Listener is to put it into a CDB that runs on a server which is not on the listener.ora list of invited nodes (REGISTRATION_INVITED_NODES_LISTENER). But very, very often, a standby or Standard Edition High Availability (SEHA) environment shares a node with the test database. It is therefore not a wise decision to rely on this parameter alone.
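For reference, a minimal listener.ora sketch of such a node restriction could look like this (the host names are of course made up):

VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON
REGISTRATION_INVITED_NODES_LISTENER = (prodnode1.example.com, prodnode2.example.com)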
But what about the remote_listener parameter itself: is it inherited when a PDB is copied?
If one looks at the remote listener parameter per container, one sees that the PDB displays exactly the same remote listener entries as the root container (the remote listener host name has been "masked" here as well). The following screenshot shows the parameter for container ID 3 (my PDB) and for container ID 1:
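The same per-container picture can also be produced with a query like this (the value column holds the masked remote listener address):

SQL> select con_id, name, value
  2  from   v$system_parameter
  3  where  name = 'remote_listener';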
What happens if this PDB is now cloned to a new, empty test database via a database link? The new test CDB (called NRLORCL, which stands for No-Remote-Listener-ORCL) runs in parallel to the standby database on the same server (the second node), which is a very common setup for many customers.
There are no user-created PDBs on the test CDB yet:
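A quick check shows only the seed container (output trimmed):

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO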
There is no remote listener setup on the new test CDB:
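The parameter is simply empty:

SQL> show parameter remote_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_listener                      string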
The services present in the CDB before cloning are only those of the root container:
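Before the clone, a query like this lists only the default root services (such as SYS$BACKGROUND, SYS$USERS, and the services named after the CDB itself):

SQL> select con_id, name from cdb_services order by con_id, name;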
After creating a user with the necessary privileges on the production database and a database link, the production PDB (with its remote listener setup) can now be cloned to the test CDB:
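A sketch of these steps (the user name, link name, TNS alias, password, and the source PDB name jsopdb are made up for this example):

-- on the production CDB
SQL> create user c##clone_user identified by "MySecretPwd12";
SQL> grant create session, create pluggable database to c##clone_user container=all;

-- on the test CDB NRLORCL (assuming OMF is in use, otherwise add a FILE_NAME_CONVERT)
SQL> create database link clone_link
  2  connect to c##clone_user identified by "MySecretPwd12" using 'PRODORCL';
SQL> create pluggable database jsoclonedtestpdb from jsopdb@clone_link;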
As long as the PDB is closed (mounted), there are no services running inside of the test CDB. When the PDB is opened for the first time, its new services are created. We can find them in cdb_services.
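A sketch of that first open and the check afterwards; the application service JSO_APP_SERVICE now shows up for the new container:

SQL> alter pluggable database jsoclonedtestpdb open;
SQL> select con_id, name from cdb_services where con_id > 2 order by name;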
Thank god, nothing worse happened: the remote_listener parameter is not cloned along with the PDB when it is running from the new test CDB.
As the root container of NRLORCL (the test CDB) also has no remote listener entry pointing to the remote listener host, the running JSO_APP_SERVICE has only a local effect (bad enough, though): if the production database were then operated as primary on the standby node (which is the same system where the test CDB is running), the LOCAL LISTENER would register the JSO_APP_SERVICE service twice on this node! Once for the open primary DB and once for the cloned PDB on the test database. Something that should be avoided!
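One way to defuse this on the test side (a sketch, assuming the service is not managed by any other tool) is to stop and delete the cloned application service inside the new PDB:

SQL> alter session set container = jsoclonedtestpdb;
SQL> exec dbms_service.stop_service('JSO_APP_SERVICE')
SQL> exec dbms_service.delete_service('JSO_APP_SERVICE')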
Even worse: if NRLORCL had the same remote listener parameter setup as the ORCL database, the remote_listener parameter would again be inherited by jsoclonedtestpdb. This means that the service would also be active twice on the remote listener: once for the primary-standby database combination and once for the test database. What a mess!
One should therefore always bear this in mind: all application database services are moved to the new server with a PDB clone. A SERVICE_NAME_CONVERT clause is absolutely necessary for a remote PDB clone with its own services, as otherwise the original AND the clone will register at the remote listener if a corresponding remote_listener parameter is set for the CDB. This would result in connections being established sometimes to one DB and sometimes to the other.
This might be desirable for RAC (Real Application Clusters), but never ever for a normal database!
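For a normal database, the fix is exactly that SERVICE_NAME_CONVERT clause; a sketch with the made-up names from above (the new service name JSO_TEST_SERVICE is also made up):

SQL> create pluggable database jsoclonedtestpdb from jsopdb@clone_link
  2  service_name_convert = ('JSO_APP_SERVICE', 'JSO_TEST_SERVICE');

This way only the renamed JSO_TEST_SERVICE is created in the clone, and the production JSO_APP_SERVICE keeps registering alone at the remote listener.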