Oracle In-Memory Base-Level (Part 2: How to enable the base level and find candidates for In-Memory Base-Level)

The easiest part is enabling the In-Memory Base Level. There are only two parameters one needs to set in the spfile/pfile.

The first is to set the INMEMORY_FORCE parameter to "BASE_LEVEL". The default value is "DEFAULT", which means the full $$$ In-Memory option of the Enterprise Edition database. The Base Level is cost free, but it is also only included with the Enterprise Edition license.
The good thing: as soon as the database is started with the parameter set to BASE_LEVEL, all features that are not allowed with this level are DISABLED! That means you can't use them, and therefore you don't run into any license issues by accidentally using things you haven't licensed (there are enough examples in the database where something can be used "technically" but is "forbidden" by the license). This is why I would suggest setting INMEMORY_FORCE to BASE_LEVEL on all EE databases - even if you don't want to use it, it will prevent anybody from using the $$$ option!

The second parameter to set is INMEMORY_SIZE, the size of the In-Memory Column Store. The minimum is 100 MB, the maximum is 16 GB. If you try to set it higher, it will not work until you change INMEMORY_FORCE back to "DEFAULT".

Remark: After setting these parameters you need to restart the database! (By the way, increasing INMEMORY_SIZE can be done with alter system and scope memory/both; shrinking works only with scope spfile and another restart.)
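
As a minimal sketch (assuming an spfile is in use and 4 GB is the size you want - adjust both to your needs), the whole enablement looks roughly like this:

  -- enable the Base Level and size the In-Memory Column Store
  ALTER SYSTEM SET INMEMORY_FORCE = BASE_LEVEL SCOPE = SPFILE;
  ALTER SYSTEM SET INMEMORY_SIZE  = 4G SCOPE = SPFILE;

  -- restart so the settings take effect
  SHUTDOWN IMMEDIATE
  STARTUP

  -- increasing the size later works online (as mentioned above), shrinking does not
  ALTER SYSTEM SET INMEMORY_SIZE = 8G SCOPE = BOTH;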

To understand which objects can be candidates, we first must understand for which scenarios the In-Memory Base Level can be helpful.

One of the main points is that In-Memory is absolutely transparent for applications. If data is missing from the In-Memory Column Store, the traditional Buffer Cache is used to get the data. As compression is used for the In-Memory Cache (just another name for the In-Memory Column Store), MORE data MAY fit into the In-Memory Cache than into the Buffer Cache (I will highlight that in another post).

Where is the In-Memory Cache helpful for getting better performance? To answer this question we consult the documentation (an illustrative query follows the list):

  • Faster queries, scans, joins and aggregates, especially with filters using e.g. <, >, IN, =
  • Getting only a few columns from a table/materialized view with a big number of columns
  • Getting LOB columns with SQL operators
  • SQL statements where Bloom filters can be used to eliminate data sets that are not scanned, e.g. small dimension tables filtering large tables, typically via primary key-foreign key relationships
  • Queries with aggregations, like GROUP BY or VECTOR GROUP BY
  • Queries with contains() on XML or text columns or JSON_textcontains() for JSON documents
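
To make this more tangible, here is a purely hypothetical example (assuming an HR-like schema with a wide employees table, similar to my test environment from Part 1) that hits several of these criteria at once - only a few columns out of many, range and IN filters, and a GROUP BY aggregation:

  -- hypothetical example: few columns from a wide table, range/IN filters, GROUP BY aggregation
  SELECT department_id,
         COUNT(*)    AS cnt,
         AVG(salary) AS avg_salary
    FROM employees
   WHERE hire_date > DATE '2015-01-01'
     AND job_id IN ('IT_PROG', 'SA_REP')
   GROUP BY department_id;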

As a first step, ALL objects that fit one of the above categories are possible candidates for the In-Memory Cache. That means nearly all tables, table partitions, materialized views or expressions could be possible candidates (indexes, for example, can't). For a running application this isn't really helpful.

A bit luckier are people who have licensed the Diagnostics and Tuning Pack for the database. If so, there is a little tool available for download at support note 1965343.1.

The Database In-Memory Advisor was built for the "normal" In-Memory option. It installs some program code and analyzes the activity of the database. One can specify the maximum In-Memory cache size as input. The activity is analyzed using, for example, the Active Session History (this is why the Diagnostics and Tuning Packs are needed).

The HTML report one then gets shows the objects that are candidates for In-Memory, the compression type and the expected performance improvement. If you are interested in the In-Memory Advisor, please have a look at this document. There is also a really good YouTube video which can assist you; starting at roundabout 27:14 you can see the HTML output of the advisor. What one must keep in mind: the In-Memory Advisor uses ALL compression modes (also in the script it generates to populate the objects), but for BASE_LEVEL only "memcompress for query low" is available. A good workaround is to run the In-Memory Advisor with a cache size a lot smaller than what is really available. At least one gets a good impression of possible candidates.

And if one does not have the Diagnostics and Tuning Pack licensed? Do you like to dig into the data dictionary and performance views? Yes, there are 3rd-party tools available for this, but no, I wouldn't recommend using them (at least the ones I have seen so far). This means the most helpful thing is to look at the well-known performance views like v$sql_plan, v$sql, etc. Check these views for execution statistics and execution plans and try to get the "worst" statements regarding the following criteria (a query sketch follows the list):

  • Find the statements with the highest amount of time spent by the application and find out which objects they are using
  • Take a look at recurring joins in the statements (as they can be candidates for In-Memory expressions)
  • Check statements with a lot of I/O (full table scans or a high number of index blocks) as well as high Buffer Cache reads
  • Separate, highly frequently used table partitions
  • Frequently used materialized views
  • If a table object is already populated In-Memory, one can use the "In-Memory Expression Capture" to find the hottest expressions.
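
A minimal sketch for the first criterion, assuming the interesting cursors are still in the shared pool (the ordering and the row limit are arbitrary):

  -- top statements by elapsed time still present in the shared pool
  SELECT sql_id,
         executions,
         ROUND(elapsed_time / 1e6) AS elapsed_seconds,
         buffer_gets,
         disk_reads,
         SUBSTR(sql_text, 1, 80)   AS sql_text
    FROM v$sql
   WHERE executions > 0
   ORDER BY elapsed_time DESC
   FETCH FIRST 20 ROWS ONLY;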

An example: I have created a (OK, logically really senseless) select in the test environment I set up (see Part 1 of this post series). It runs for more than 27 minutes, and it was easy to find that statement in the v$ views.

We now return to the criteria above ("where is In-Memory helpful" and "finding the worst statements") and look at the execution plan.
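
To get the plan including the predicate information for a statement identified in v$sql, something like the following can be used (the sql_id is just a placeholder):

  -- display the cursor's execution plan plus predicate information
  SELECT *
    FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));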


Marked yellow we see full table accesses on employees (and this is a huge table with a lot of columns) and a high number of rows selected from locations. Marked blue we see something that looks good at first sight for jobs and departments - an index UNIQUE scan with an additional table access. Looking at the select, one can see that we perfectly hit the criterion "filters like IN or <, >, ...".

There are also joins (lines 5-7) for the different result sets of the subselects. That means there could be a candidate for an "In-Memory expression".

If you have this kind of execution plan, check the predicate information for each line marked with a star in the execution plan. In the next picture (showing the predicate information) it is a lot easier to understand what the optimizer is doing in the background, using the "filter" and "access" criteria. Combined with the objects in the lines of the execution plan, we find that employees is a good candidate (due to lines 8 and 4), locations is a good candidate (due to the number of rows and the filter criteria found for line 13), the filter of line 15 shows departments as a good candidate, and so on.

Stay tuned for the next posts, where I will show how to populate the IM cache and how the execution plan and the predicate information help to verify that objects were successfully populated into the IM cache. In addition, I will show why objects in the IM cache may be bigger than they are at the datafile level, and which views may be useful for keeping an eye on the IM cache.

DOAG Conference 2020 - Presentation is online

I have already uploaded the PDF of my presentation "Datenbankupgrades mit Autoupgrade über 2 Nodes" (database upgrades with Autoupgrade across 2 nodes) to my Slide Download Center (you can find it on the right side or via this link). The presentation is in German, but if you have any questions, write a comment or contact me.


Autoupgrade.jar throws error at analyze phase - UPG-1316

While testing the newest versions of autoupgrade.jar, I ran into an error (on my MS Windows environment; I haven't tested this on Linux, etc. yet).
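
For context, the analyze phase was started the usual way (the config file name here is just an example):

  REM run the analyze phase of autoupgrade; the config file name is an example
  java -jar autoupgrade.jar -config my_upgrade.cfg -mode analyze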

The error itself is shown as UPG-1316, an unexpected exception error:



I have seen two different causes. One is the disk space check when no Fast Recovery Area is used; then you get this "Min_Archive_Dest_Size" error:

 
The same problem arises when autoupgrade does the check for the FRA size - "Disk_Space_For_Recovery_Area".

Affected versions are 19.9.2 and 19.10.0; the issue does not happen with 19.7.3, 19.7.5 and 19.8.1 on the same database / server.

After investigating the issue with Development, it was fixed within 4 days in the newest development version, which I have tested successfully. I don't know when this version will be released or with which version number. If you hit the problem, stay on 19.8.1 or go for a version newer than 19.10.0 as soon as it is released.
 
@Dev: Thank you for the quick fix!