An underestimated feature: Using Oracle Remote Listener with Standard Edition 2 Licenses - Part 3 - What happens during a PDB remote clone?

This is the third post in my series on Oracle Remote Listener and Standard Edition 2 databases. The first post covered the setup of the Remote Listener, and the second showed how helpful a Remote Listener can be for high availability with SE2.

One of the questions I often get is: what happens if a pluggable database (PDB) is cloned from one root container (CDB$ROOT) to another, e.g. from a production system to a test database? It would be awful if the cloned PDB registered itself at the Remote Listener and the application started working with the production PDB and the test PDB on alternating connections.

Let's take a deeper look at that. If you haven't read the previous post(s), please do so, because I will refer to them.

The easiest way to prevent a PDB from registering with the Remote Listener is to place it in a CDB that runs on a server which is not in the listener.ora list of invited nodes (REGISTRATION_INVITED_NODES_LISTENER). But very often, a standby or Standard Edition High Availability (SEHA) environment shares a node with the test database. It is therefore not wise to rely on this parameter alone.

But what about the REMOTE_LISTENER parameter itself - is it inherited with a PDB copy?
If one looks at the parameter per container, one will see that the remote listener entry is displayed by the PDB in exactly the same way as in the root container (the remote listener host name has been "masked" here as well). The following screenshot shows the parameter for container ID 3 (my PDB) and for container ID 1:

Same parameter value for Container 1 and Container 3
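The per-container view from the screenshot can be reproduced with a query like the following - a sketch, run as a privileged user in CDB$ROOT:

```sql
-- Show the REMOTE_LISTENER value for every container;
-- the PDBs inherit the value from CDB$ROOT.
select con_id, name, value
  from v$system_parameter
 where name = 'remote_listener'
 order by con_id;
```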

What happens if this PDB is now cloned to a new, empty test database via a database link? The new test CDB (called NRLORCL, which stands for No-Remote-Listener-ORCL) runs in parallel to the standby database on the same server (second node) - a very common setup for many customers.
There are no user-created PDBs in the test CDB:

only CDB$ROOT and PDB$SEED in v$containers

There is no remote listener setup on the new test CDB:

No value set for REMOTE_LISTENER in new CDB

The services present in the CDB before cloning are only those of the root container:

Only CDB$ROOT standard services

After creating a user with the required privileges on the production database and a database link, the production PDB (with a remote listener setup) can now be cloned to the test CDB:
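The clone could be performed with statements like these - a sketch run on the test CDB (NRLORCL); the user name clone_user and the link name prod_cdb_link are assumptions for illustration:

```sql
-- On NRLORCL: database link pointing to the production CDB (ORCL)
create database link prod_cdb_link
  connect to clone_user identified by "***"
  using 'ORCL';

-- Clone the production PDB over the link
create pluggable database jsoclonedtestpdb
  from jsotestpdb@prod_cdb_link;
```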

As long as the PDB is closed (mounted), there are no services running inside the test CDB. When the PDB is opened for the first time, the new PDB services are created. We can find them in cdb_services.

Services including new PDB services 
 
The JSO_APP_SERVICE is cloned and created together with the new standard PDB service - but is it active? What do you think? Well, let's look into v$active_services:
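A simple query like this (a sketch) shows which services are actually running, per container:

```sql
-- Running services per container; the cloned PDB's services
-- show up here once the PDB is opened.
select con_id, name
  from v$active_services
 order by con_id, name;
```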
 
 
This is because we used "alter pluggable database save state" to get all the PDB services running when we switch the production database over to another node. What if the REMOTE_LISTENER parameter were now cloned together with the PDB? Would we have two open databases receiving connections from the application - the one in production and our cloned test PDB?

REMOTE_LISTENER is not cloned with the PDB


Thankfully, nothing bad happened. The remote_listener parameter is not cloned with the PDB when it runs from the new test CDB.

As the NRLORCL (test CDB) root container also has no remote listener entry for the remote listener host, the running JSO_APP_SERVICE only has a local effect (which is bad enough): if the database were operated as primary on the standby node (the same system where the test CDB is running), the LOCAL LISTENER would register the JSO_APP_SERVICE service twice on this node - once for the open primary DB and once for the cloned PDB on the test database. Something that should be avoided!

Even worse: if the NRLORCL had the same remote listener parameter setup as the ORCL database, the remote listener parameter would again be inherited by the jsoclonedtestpdb.

This means that the service would also be active twice on the remote listener - once for the primary-standby database combination and once for the test database. What a mess!

One should therefore always bear this in mind:
All application database services move to the new server with a PDB clone. A SERVICE_NAME_CONVERT is absolutely necessary for a remote PDB clone with its own services, as otherwise the original AND the clone will register at the remote listener if a corresponding remote listener parameter is set for the CDB. This would result in connections being established sometimes to one DB and sometimes to the other.
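In clone syntax, this could look like the following sketch (the new service name JSO_TEST_SERVICE and the link name prod_cdb_link are assumptions):

```sql
-- Rename the application service while cloning, so the clone
-- never registers the production service name anywhere.
create pluggable database jsoclonedtestpdb
  from jsotestpdb@prod_cdb_link
  service_name_convert = ('JSO_APP_SERVICE', 'JSO_TEST_SERVICE');
```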

This might be desirable for a RAC (Real Application Cluster), but never ever for a normal database!


An underestimated feature: Using Oracle Remote Listener with Standard Edition 2 Licenses - Part 2 (High Availability with Standby DB)

This blog post is part 2 of my series "Remote Listener and Standard Edition 2". The first post was about the setup of the Remote Listener; this second one shows how a Remote Listener can provide a more or less transparent failover of new connections to a database that is set up as a highly available standby environment.

Brief recap:
There are two database servers with a primary-standby database (ORCL) and a remote listener set up on a third server. (While one could use "Connection Manager" (CMAN) for Enterprise Edition databases, the license does not allow its use with Standard Edition 2 databases.) Which kind of primary-standby solution is used does not matter - they all work more or less the same for Standard Edition databases (based on archived redo log shipping).

The solution cookbook (which also works with Standard Edition High Availability (SEHA)).

The PDB is called jsotestpdb and does not yet have its own service for the application. However, as such a service makes sense, one with the name JSO_APP_SERVICE is created.
 
The service can be created and started on the primary database using the following syntax:

exec dbms_service.create_service('JSO_APP_SERVICE','JSO_APP_SERVICE');

After the service is created, it can be started: 

exec dbms_service.start_service ('JSO_APP_SERVICE');

As it makes sense for high availability reasons that this service is also started automatically during a switchover, either a "SAVE STATE" of the PDB must be executed or a trigger must be created that starts this service when the PDB is opened (a third option would be to create and start the service in the Grid Infrastructure). Services are only started when the PDB is opened, which ensures that this service is never started on the standby side.
The simplest method to automate the service start is certainly the save state command (issued while the service is active):

alter pluggable database jsotestpdb save state;
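The trigger alternative mentioned above could look like the following sketch, created while connected to the PDB (the trigger name is an assumption; an AFTER STARTUP trigger inside a PDB fires when the PDB is opened):

```sql
-- Alternative to SAVE STATE: start the application service
-- automatically whenever the PDB is opened.
create or replace trigger jso_start_app_service
  after startup on database
begin
  dbms_service.start_service('JSO_APP_SERVICE');
end;
/
```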


On the listener side, this now looks as follows (the remote listener host name is hidden):

 

 
It is important that both root containers (i.e. the primary and the standby database in mount status) are registered on the listener (section marked as a snip from the previous picture).

Also highlighted is the newly created and started PDB application service.

Everything is now in place to prepare the client connections. Therefore an entry is created in TNSNames.ora with the remote listener host as HOST (the remote listener host name is hidden in the following picture).
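Such an entry could look like this sketch (the alias name is an assumption, and the remote listener host is a placeholder):

```
jsotestpdb_ha =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <remote-listener-host>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = JSO_APP_SERVICE)
    )
  )
```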

Now one can connect with sqlplus and select the host name the connection was routed to (i.e. where the primary database is running).
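One way to check this from the sqlplus session is a query like the following sketch; SYS_CONTEXT with the USERENV namespace works without extra privileges:

```sql
-- Show which server actually served this connection
select sys_context('USERENV', 'SERVER_HOST') as server_host
  from dual;
```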


 


As one can see, the connection was routed to the PDB running on the WINTEST-1 node, because this is the primary database. The standby is running on the WINTEST-2 node, and the application service is not running there (the PDB is mounted, not opened). Now we do a switchover (with the toolset of your standby software provider) and reconnect after the PDB has been opened. We still use the same tnsnames.ora, which points to the remote listener host and the service registered there.

 
Now the connection is on the WINTEST-2 node, because this is the primary environment now, while WINTEST-1 is the standby system. Without changing anything in the application or in tnsnames.ora, and without any "failover hiccups" from timed-out connection retries, the application connects directly to the running PDB. That's easy, isn't it?

Oracle Database Appliance Patch Update error when default gateway is not pingable - DCS-10001 and PRGO-1664

When setting up an Oracle Database Appliance (ODA), the network configuration with odacli configure-firstnet checks whether the default gateway is pingable. If it is not, the configuration is not possible.

But one of our customers (or rather their network administrators) later configured the environment (router) so that pings are not answered, which means they time out (this is part of the company's security strategy). The network works, but the ODA can no longer ping the default gateway (no one can).

When we tried to patch an ODA from release 19.21 to 19.24, we therefore ran into an ugly error while the Grid Infrastructure was being upgraded from 19.21 to 19.24, leaving us with two unzipped home directories:

"DCS-10001:Internal error encountered: Failed to set ping target on public network."

If you take a look into the dcs-agent.log, you can find a more detailed error message:

DEBUG [Server patching : JobId=<something>] [] c.o.d.a.r.s.g.GiOperations: ping failed: unknown gateway <IP-Address of Gateway>

The creation of the prepatch report for the server was fine ([root@oda1 opt]# /opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.24.0.0.0). It finished successfully without errors, so we did not expect to run into an error later on.

But while running the server patch ([root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-server -v 19.24.0.0.0), we got the above error.

We asked the network team to fix the ping problem, tested the ping manually, and restarted odacli update-server -v 19.24.0.0.0 a second time. Sometimes, when an ODA update gets stuck, it is fine to just restart the update-server - but this time we got another error:

"DCS-10001:Internal error encountered: Failed to patch GI with RHP : DCS-10001:Internal error encountered: PRGO-1664 : The specified source working copy \"OraGrid192100\" is a software-only working copy..."

Thankfully, we did an "odabr backup -snap" before we started the update, and by running "odabr restore -snap -force" we were able to restore the "before" image of the operating system. If you start odabr restore without the force option, you will get the warning "Clusterware patchlevel at backup time is not what you have now. ODABR can't be considered a tool to perform Grid Infrastructure downgrade" and the snapshot will not be restored.

Unfortunately, the Grid Infrastructure is not part of the odabr backup with newer ODA software releases, which means we still had two Grid Infrastructure homes and the problem with the working copy being a software-only working copy.

Searching on support.oracle.com did not really help - a search for PRGO-1664 does not return useful hits. But there is a small section in the note for the odabr tool (2466177.1): if a GI update was in progress and you have done a restore, it may happen that the executable permissions are set incorrectly.

To fix this, set GRID_HOME to the old grid environment as user root and run [root@oda1]# $GRID_HOME/crs/install/rootcrs.pl -lock

This changes the executable permissions back - in our case in the 19.21 grid directory - and after this change, patching (including odacli update-server -v 19.24.0.0.0) ran fine.

Depending on when you took the odabr backup, you may need to start with patching the DCS stack or with creating the prepatch report again.

For upcoming ODA software versions, an enhancement request has been filed with development, so that the gateway ping check becomes part of the prepatch report and no longer leads to a corrupted GI update.

In the meantime it can be helpful to test manually whether the default gateway is pingable. To find the gateway's IP address, you can run "ip route show". It outputs a line "default via <IP Address>". This IP address is your gateway - a "ping <IP Address>" will (hopefully) work. If it does, you will not run into the problem we hit. We will add the ping test to our manual checklist; it is 10 seconds of work and can save an hour of work later on.
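The two manual steps can be sketched like this. The sample route line is hard-coded so the snippet is self-contained; on a real ODA you would pipe the live "ip route show" output instead:

```shell
# Parse the default gateway from an `ip route show`-style line.
sample='default via 192.168.1.1 dev eth0 proto static'
gw=$(printf '%s\n' "$sample" | awk '/^default via/ {print $3; exit}')
echo "default gateway: $gw"

# On the real system, follow up with a ping test before patching:
#   ping -c 1 -W 2 "$gw"
```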


 

An underestimated feature: Using Oracle Remote Listener with Standard Edition 2 Licenses - Part 1 (Setup)

Even for small and medium-sized companies that work with Oracle Standard Edition 2 (SE2) databases, highly available and secure systems are a "must have". This post covers how to install a remote listener (more details on high availability and security follow in the next posts).

The databases are typically protected with additional software for high availability. The best-known provider is certainly dbvisit with its product "dbvisit Standby", but there are other products available, like the one from the company I work for - this alternative standby software is called robotron*Standby.

All software providing high availability for SE2 works more or less in the same way: a database in a second environment is permanently recovered from the archived redo logs of the primary and thus kept up to date. The applications or users connect to the (now pluggable) databases via a TNSNames.ora entry, JDBC connect string, or easy connect. The host and the listener (ports) for the connections then typically change when a switchover or failover happens.
The application database service (I really recommend NOT using the default PDB service for your applications) keeps its name and is only available where the primary database is running and opened read/write - but there are still a few things to note.

Typically, connection entries for such highly available databases then look like this in TNSNames.ora (analogous in JDBC or with easyconnect):

application_service_name_ha=
 (DESCRIPTION=
   (ADDRESS_LIST=
     (LOAD_BALANCE=off)
     (FAILOVER=on)
        (ADDRESS=(host = proddbhost)(protocol = tcp)(port = 1521))
        (ADDRESS=(host = standbydbhost)(protocol = tcp)(port = 1521))
   )
  (CONNECT_DATA=
   (SERVICE_NAME=application_service_name.domain)
  )
 )

As the default values are not sufficient, additional parameters are often specified for the failover, e.g. TRANSPORT_CONNECT_TIMEOUT, CONNECT_TIMEOUT, RETRY_DELAY or RETRY_COUNT. I don't want to describe all of them here in detail, but in case you need them, you have at least heard of them.
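For illustration, the entry above extended with these parameters might look like this - the values are examples, not recommendations:

```
application_service_name_ha =
 (DESCRIPTION =
   (CONNECT_TIMEOUT = 5)
   (TRANSPORT_CONNECT_TIMEOUT = 3)
   (RETRY_COUNT = 3)
   (RETRY_DELAY = 2)
   (ADDRESS_LIST =
     (LOAD_BALANCE = off)
     (FAILOVER = on)
     (ADDRESS = (host = proddbhost)(protocol = tcp)(port = 1521))
     (ADDRESS = (host = standbydbhost)(protocol = tcp)(port = 1521))
   )
   (CONNECT_DATA =
     (SERVICE_NAME = application_service_name.domain)
   )
 )
```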


However, the problems that result from this way of connecting to the application service of the database are always the same: you have to store both database hosts in the connect string, and the connection failover to the other machine always takes a moment.
Anyone using JDBC or easy connect syntax will have a hard time with these complicated connection strings anyway. There are also applications in which such connection strings cannot be entered because of their length (especially if someone combines more than ONE standby with a primary database). And last but not least, the local listener at the database is "open to the whole world", right? Anyone who knows the database server's host name and the standard port can try to access the database.

Customers who use Enterprise Edition licenses with Data Guard are provided with a tool called Connection Manager (CMAN). It can act as a proxy listener, which not only forwards connections but also allows securing connections, servers, and services a bit more. However, as the Connection Manager is only part of the Enterprise Edition database license, Standard Edition 2 customers must find another solution.

Remote listeners are the best possible way to build a kind of connection manager on your own. These remote listeners are often installed on systems that have to run "anyway" for an application to work - typically application servers, print servers, etc., which are known to the end users and without which the application is not available (or only partially). In principle, however, any server that is itself highly available is suitable.

To be able to use a remote listener, a Full Oracle Database Client is required on this server; an Instant Client does NOT work for this. The client can be downloaded e.g. via https://edelivery.oracle.com.
As most listeners are backward compatible with older releases, the highest version can theoretically be used here. Nevertheless, most of my customers want to use the same release as the database, to stay in sync.

Be aware that downloading from https://edelivery.oracle.com means one gets the BASE release (e.g. 19.3.0.0) of the client. It can (and should) be patched (not covered in this blog post).

After the download, the client must be installed on the server as an administrator - installing on Windows means registry entries are set and new Windows services are created. Here are some hints:
It is important (for security reasons) that only the components actually used are selected for installation. Choosing "Custom" as the installation type lets one pick the right parts. Most administrators will probably select "Use Windows Built-in Account" for the Oracle Home/Windows services user, which is fine. The software location either follows the Oracle standard, or it can make sense to use an application server software directory (e.g. create an oracle subdirectory in your application software base directory). It is sufficient to select "Oracle Net Listener" - dependent components are installed automatically.
However, SQL Developer, the ODBC driver, or other components are often required on the application server for installing or updating the application, so they can also be selected here.

Installer Selection Screen

After the installation is done, one finds new services in Windows - the only one that must run is the listener service. All other services installed by the client can be disabled or set to manual startup.

The listener is started automatically, but no service is registered yet, because the configuration is missing.

Listener status without configuration

The listener.ora file must be adapted so that a database can register with the listener.
As always, this file is located under ORACLE_HOME\network\admin. The listener line with EXTPROC1521 as listening endpoint is usually removed for security reasons; it is not needed.
Now two parameters are added, where "LISTENER" in the parameter names stands for the name of the listener (which can of course also have a different name):


VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON
REGISTRATION_INVITED_NODES_LISTENER = (<IP-Addresses>,<HostNames>,<IP-WITH-WILDCARDS>)

Typically, a 2-node environment for high availability would also include both nodes. The listener.ora on the application server appserver1.localdomain could look like this:

# listener.ora Network Configuration File: C:\app\oracle\product\19.0.0\client_1\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = appserver1.localdomain)(PORT = 1521))
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON
REGISTRATION_INVITED_NODES_LISTENER = (server1.localdomain,server2.localdomain)

After the configuration is saved, the listener configuration needs to be reloaded (lsnrctl reload listener, or e.g. restart the service).

Now everything is prepared on the remote listener side to allow service registration from databases hosted on server1.localdomain and server2.localdomain. The next step is to set the remote listener parameter in the CDB so the database can reach the remote listener. In this example, one can run the following command on the database:

alter system set remote_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=appserver1.localdomain)(PORT=1521))' scope=both;

Now the database registers itself with the remote listener.

 
In addition, the tnsnames.ora (or a JDBC connection string) can be adapted to connect to the database using the application server's listener instead of the database host's listener:

jsotestpdb,jsotestpdb.localdomain =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = appserver1.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = jsotestpdb.localdomain)
    )
  )

The (application) client does not know anything about the database host at the time of connection.

What impact does this have on standby databases? What about more "security" after setting up a remote listener successfully? What happens with PDB services when cloning to other databases?

Valid questions that will be answered in the next posts. One can move directly to blog post 2 - using a highly available database (Standard Edition High Availability or a primary-standby combination) with a remote listener.