Oracle Database Release 18c New Features

New Version Numbering for Oracle Database: Version numbers use a three-field format: Year.Update.Revision

  • Year is the last two digits of the year a release is delivered: 18 stands for year 2018, hence Oracle Database 18c. Update tracks the release update (RU) level; Oracle also uses it for beta releases. Revision tracks the associated release update revision (RUR) level (0, 1, or 2). Starting with 18.1.0, the new version numbering applies to all RUs and RURs. Production databases typically start on the latest RU for the fastest path to stabilization; once production stability is achieved, you can switch over to RURs seamlessly.
  • Using DBCA to Clone PDBs: In Oracle Database 12c, DBCA enables you to create a new PDB only as a clone of the CDB seed. In Oracle Database 18c, DBCA can create a new PDB as a clone of an existing PDB, not necessarily from the CDB seed. DBCA clones the PDB in hot mode, creates the data file directory for the new PDB, and opens the new PDB (see the sketch after this list).
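A minimal sketch of the silent-mode DBCA invocation for cloning an existing PDB. The CDB name cdb1, source PDB pdb1, and new PDB pdb2 are hypothetical, and the option names should be verified against the DBCA documentation for your release:

dbca -silent -createPluggableDatabase \
  -sourceDB cdb1 \
  -createPDBFrom PDB \
  -sourcePDB pdb1 \
  -pdbName pdb2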

Oracle Database 18c introduces the CDB Fleet feature. CDB Fleet aims to provide the underlying infrastructure for massive scalability and centralized management of many CDBs.

  • A single CDB supports a maximum of 4096 PDBs; a CDB fleet can hold more than 4096 PDBs.
  • Different PDBs in a single configuration require different types of servers to function optimally. Some PDBs might process a large transaction load, whereas other PDBs are used mainly for monitoring. You want the appropriate server resources for these PDBs, such as CPU, memory, I/O rate, and storage systems.
  • Each CDB can use all the usual database features for high availability, scalability, and recovery of the PDBs in the CDB, such as Real Application Clusters (RAC), Data Guard, RMAN, PITR, and Flashback.
  • PDB names must be unique across all CDBs in the fleet. PDBs can be created in any CDB in the fleet, but can be opened only in the CDB where they physically exist. A configuration sketch follows this list.
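One CDB in the fleet is designated the lead CDB, and the other CDBs register with it. A minimal sketch, assuming a database link named lead_link that connects from the member CDB root to the lead CDB root (the link name is hypothetical):

-- In the root of the CDB chosen as the fleet lead:
ALTER DATABASE SET lead_cdb = TRUE;

-- In the root of each member CDB:
ALTER DATABASE SET lead_cdb_uri = 'dblink:lead_link';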

PDB Snapshot Carousel: In Oracle Database 18c, when you create a PDB, you can specify whether it is enabled for PDB snapshots. A PDB snapshot is an archive file (.pdb) containing the contents of the PDB as of snapshot creation.
PDB snapshots allow the recovery of a PDB back to the oldest PDB snapshot available for it. This extends recovery beyond the flashback retention period, which requires database flashback to be enabled. For example, you may have to restore PDB1 back to its state on Wednesday. Another use case for PDB snapshots is reporting on historical data: you might create a snapshot of a sales PDB at the end of the financial quarter and then create a PDB based on this snapshot to generate reports from the historical data. Every PDB snapshot is associated with a snapshot name and with the SCN and timestamp at snapshot creation. The MAX_PDB_SNAPSHOTS database property sets the maximum number of PDB snapshots for each PDB. The default and allowed maximum is 8. When the maximum number is reached for a PDB and an attempt is made to create a new PDB snapshot, the oldest PDB snapshot is purged. If the oldest PDB snapshot cannot be dropped because it is open, an error is raised. You can decrease this limit for a given PDB by issuing an ALTER DATABASE statement specifying a maximum number of snapshots. If you want to drop all PDB snapshots, set the limit to 0.
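A minimal sketch of the carousel in SQL, using the hypothetical names pdb1, sales_q1, and pdb1_q1; the first two statements are issued while connected to the PDB:

-- Take automatic snapshots every 24 hours:
ALTER PLUGGABLE DATABASE SNAPSHOT MODE EVERY 24 HOURS;

-- Create a manual, named snapshot:
ALTER PLUGGABLE DATABASE SNAPSHOT sales_q1;

-- From the CDB root, create a full PDB from that snapshot, for example for quarter-end reporting:
CREATE PLUGGABLE DATABASE pdb1_q1 FROM pdb1 USING SNAPSHOT sales_q1;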

Switching Over a Refreshable Cloned PDB: In Oracle Database 18c, after a user creates a refreshable clone of a PDB, the roles can be reversed: the refreshable clone becomes the primary PDB, which can be opened in read/write mode, while the former primary PDB becomes the refreshable clone. The ALTER PLUGGABLE DATABASE command with the SWITCHOVER clause must be executed from the primary PDB. The refresh mode can be either MANUAL or EVERY <refresh interval> [MINUTES | HOURS]; REFRESH MODE NONE cannot be specified in this statement. After the switchover operation, the former primary PDB becomes the refreshable clone and can be opened only in READ ONLY mode. During the operation, the source is quiesced, and any redo generated since the last refresh is applied to the destination to bring it current. The database link user must also exist in the primary PDB if the refreshable clone resides in another CDB.
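A minimal sketch of the switchover statement, assuming a primary PDB pdb1 and a refreshable clone reachable through the database link clone_link (all names hypothetical; verify the exact clause layout against the SQL Language Reference for your release):

-- Issued from the primary PDB:
ALTER PLUGGABLE DATABASE pdb1
  REFRESH MODE EVERY 5 MINUTES
  FROM pdb1@clone_link
  SWITCHOVER;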

Using RMAN Enhancements: In Oracle Database 18c, you can transport the existing backups and backed-up archived log files of a non-CDB and reuse them to restore and recover the new PDB. The backups transported from the non-CDB into the PDB are called preplugin backups. Transporting the backups and backed-up archived log files associated with a non-CDB before migration requires the following steps (a consolidated command sketch follows the steps):

1. Execute the new DBMS_PDB.EXPORTRMANBACKUP procedure in the non-CDB opened in read/write mode. This is a mandatory step for non-CDB migration. The procedure exports all RMAN backup metadata that belongs to the non-CDB into its own dictionary. The metadata is transported along with the non-CDB during the migration.

2. Use DBMS_PDB.DESCRIBE to generate an XML metadata file describing the structure of the non-CDB, including the list of data files.

3. Archive the current redo log file; it is required for a potential restore and recovery using preplugin backups.

4. Transfer the data files, backups, and archive log files to the target CDB.

5. Use the XML metadata file during the plugging step to plug the new PDB into the CDB.

6. Run the $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script to delete unnecessary metadata from the PDB SYSTEM tablespace.

7. Open the PDB. When the PDB is opened, the exported backup metadata is automatically copied into the CDB dictionary, except for the current redo log file archived in step 3; catalog that archived redo log file as one of the preplugin archived logs. Because the backups for the PDB are now available in the CDB, they can be reused to recover the PDB.
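A consolidated sketch of the key commands, using a hypothetical target PDB pdb1 and illustrative file paths; verify the exact procedure parameters and RMAN syntax for your release:

-- Step 1: in the non-CDB, opened read/write (SQL*Plus):
EXECUTE DBMS_PDB.EXPORTRMANBACKUP();

-- Step 2: in the non-CDB, opened read-only:
EXECUTE DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');

-- Step 5: in the target CDB root, keeping the transferred files in place:
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/noncdb.xml' NOCOPY;

-- Step 7: in RMAN, connected to the new PDB, catalog the log archived in step 3:
CATALOG PREPLUGIN ARCHIVELOG '/u01/arch/arch_1_100.arc';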

Duplicating On-Premise Encrypted CDB as Cloud Encrypted CDB: If the source database already contains encrypted tablespaces, the DUPLICATE command must have access to the TDE master key of the source (TARGET) database, because the clone instance needs to decrypt the data files before re-encrypting them during the restore operation. In this case, the keystore has to be copied from the on-premise CDB to the clone instance before starting the DUPLICATE, and it must be opened.
The DUPLICATE command supports the new AS ENCRYPTED clause to restore the CDB with encryption.
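A minimal sketch of such a duplication in RMAN; the destination name cdbcloud is hypothetical, and the keystore copied from the source must already be open on the auxiliary instance:

DUPLICATE TARGET DATABASE TO cdbcloud
  FROM ACTIVE DATABASE
  AS ENCRYPTED;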

Performance Improvement: Oracle Database 18c introduces the Automatic In-Memory (AIM) feature. The benefits of configuring AIM are:

  • Ease of management of the IM store: Without AIM, relieving memory pressure in the IM column store by evicting cold IM segments requires significant user intervention. AIM addresses this with minimal user intervention.
  • Improved performance: AIM ensures that the “working data set” is in the IM column store at all times. The working data set is the subset of all IM-enabled segments that is actively queried at any given time, and it is expected to change over time for many applications. The working data set contains a hot portion that is active and a cold portion that is not. For data-aging applications, AIM removes the cold IMCUs from the IM column store.

With AIM, the DBA need not define IM priority attributes or ADO IM policies on IM segments. AIM automatically reconfigures the IM column store by evicting cold data out of the IM column store and populating the hot data. The unit of data eviction and population is an on-disk segment. AIM uses the heat map statistics of IM-enabled segments together with user-specified configurations to decide the set of objects to evict under memory pressure.
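AIM is controlled by the INMEMORY_AUTOMATIC_LEVEL initialization parameter (OFF, LOW, or MEDIUM in 18c). A minimal sketch, assuming an IM column store is already configured with INMEMORY_SIZE:

-- LOW evicts cold segments under memory pressure;
-- MEDIUM adds further optimization of what is populated.
ALTER SYSTEM SET INMEMORY_AUTOMATIC_LEVEL = MEDIUM;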

Memoptimized Rowstore: Smart devices connected to the Internet that send and receive data require support for fast ingest and query rates across thousands of devices. The Memoptimized Rowstore feature provides high-speed streaming of single-row inserts and very fast lookups of key-value type data. The feature works only on tables that have an enabled PRIMARY KEY constraint. To provide the speed necessary to service thousands of devices, the data is aggregated and streamed to the database through trusted clients. The fast-lookup part of the Memoptimized Rowstore allows access to existing rows through a new hash index structure and pinned database blocks. Oracle Database supports ingest and access of row-based data in a fraction of the time that conventional SQL transactions take. With high-speed streaming of input data and the use of innovative protocols and hash indexing of key-value pairs for lookups, the Memoptimized Rowstore significantly reduces transaction latency and overhead, enabling businesses to deploy thousands of devices to monitor and control all aspects of their business (see the sketch below).
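A minimal sketch of enabling a table for fast lookup; the table devices is hypothetical, and MEMOPTIMIZE_POOL_SIZE is a static parameter, so an instance restart is required after setting it:

-- Size the memoptimize pool (takes effect after restart):
ALTER SYSTEM SET MEMOPTIMIZE_POOL_SIZE = 256M SCOPE = SPFILE;

-- The table must have an enabled PRIMARY KEY constraint:
CREATE TABLE devices (
  device_id NUMBER PRIMARY KEY,
  status    VARCHAR2(20)
) MEMOPTIMIZE FOR READ;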

Sharding Enhancements: Oracle Database 18c introduces the user-defined sharding method, which lets you explicitly specify the mapping of data to individual shards. It is used when, for performance, regulatory, or other reasons, certain data needs to be stored on a particular shard and the administrator must have full control over moving data between shards. Another advantage of user-defined sharding is that, in case of a planned or unplanned outage of a shard, you know exactly which data is unavailable. The disadvantage is that the database administrator must monitor and maintain a balanced distribution of data and workload across shards. With user-defined sharding, a sharded table can be partitioned by range or list. There is no tablespace set for user-defined sharding; each tablespace has to be created individually and explicitly associated with a shardspace. A shardspace is a set of shards that store data corresponding to a range or list of key values. As with system-managed sharding, tablespaces created for user-defined sharding are assigned to chunks. However, no chunk migration is automatically started when a shard is added to the SDB; the user needs to execute the MOVE CHUNK command for each chunk that needs to be migrated. GDSCTL CREATE SHARDCATALOG supports user-defined sharding with the value USER in the -sharding option. The SPLIT CHUNK command, which splits a chunk in the middle of the hash range for system-managed sharding, is not supported for user-defined sharding; you must use the ALTER TABLE SPLIT PARTITION statement to split a chunk. A sketch follows.
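A minimal sketch of a sharded table under user-defined sharding, with hypothetical shardspace, tablespace, and table names; each tablespace is created individually and associated with a shardspace:

-- Tablespaces are placed explicitly in shardspaces (no tablespace set):
CREATE TABLESPACE ts_east IN SHARDSPACE east;
CREATE TABLESPACE ts_west IN SHARDSPACE west;

-- Data is mapped to shards by list partitioning on the sharding key:
CREATE SHARDED TABLE accounts
( id             NUMBER NOT NULL,
  account_number NUMBER,
  state          VARCHAR2(2) NOT NULL,
  CONSTRAINT pk_accounts PRIMARY KEY (id, state) )
PARTITION BY LIST (state)
( PARTITION p_east VALUES ('NY', 'NJ', 'MA') TABLESPACE ts_east,
  PARTITION p_west VALUES ('CA', 'WA', 'OR') TABLESPACE ts_west );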

SQL Exadata-Aware Profile: On Exadata systems, the cost of smart scans is dependent on three system statistics:

  • I/O Seek Time
  • Multi-Block Read Count (MBRC)
  • I/O Transfer Speed

The values of these system statistics are usually different on Exadata compared to non-Exadata systems and can influence which execution plan is optimal. In Oracle Database 18c, SQL Tuning Advisor runs a new analysis to determine whether any of these system statistics are out of date. If any of the system statistics are found to be stale, and gathering them improves the performance of the SQL being tuned, this is recommended via a SQL profile called an Exadata-aware SQL profile. Accepting such a profile affects the performance of only the SQL statement being tuned and of no other statements, consistent with the existing behavior of SQL profiles.

Example of accepting an Exadata-aware SQL profile:
EXECUTE DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
  task_name  => 'TASK_1001',
  task_owner => 'EXAUSR',
  replace    => TRUE);

Querying Inlined External Tables: Oracle Database 18c offers the possibility to query an external table without creating a persistent object for it in the data dictionary; the CREATE TABLE step can be skipped. In this case, the query inlines the external table. To inline an external table, the EXTERNAL keyword must be provided along with the same external table parameters and user-specified columns that would otherwise appear in the CREATE TABLE syntax: a list of external table columns defining the table, the access driver type, and the external table parameters. A REJECT LIMIT can be specified as an option. Note that the MODIFY keyword must be omitted when inlining an external table, because the external table is not referenced in the data dictionary. The external table metadata exists only for the duration of the query: it is created during query compilation and purged when the query has aged out of the cursor cache. The user querying the inlined external table must have the READ privilege on the directory object containing the external data, and the WRITE privilege on the directory objects containing the bad, log, and discard files.
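A minimal sketch of an inline external table query; the directory object data_dir and the file sales.csv are hypothetical:

SELECT region, SUM(amount)
FROM   EXTERNAL (
         ( region VARCHAR2(25),
           amount NUMBER )
         TYPE ORACLE_LOADER
         DEFAULT DIRECTORY data_dir
         ACCESS PARAMETERS (
           RECORDS DELIMITED BY NEWLINE
           FIELDS TERMINATED BY ',' )
         LOCATION ('sales.csv')
         REJECT LIMIT UNLIMITED
       ) sales_ext
GROUP BY region;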

Database In-Memory Support for External Tables: Oracle Database 18c enables the population of data from external tables into the In-Memory column store. This allows population of data that is not stored in Oracle Database, which can be valuable when you have other external data stores and want to perform advanced analytics on that data with Database In-Memory, particularly when the external data needs to be queried repeatedly: you avoid multiple accesses of the external storage, and the queries can benefit from Database In-Memory features each time. Data from external sources with the ORACLE_LOADER and ORACLE_DATAPUMP access types can be summarized and populated into the In-Memory column store, where repeated, ad hoc analytic queries can be run that might be too expensive to run against the source data. In-memory external tables also benefit from In-Memory expressions. You can set the INMEMORY attribute and its correlated MEMCOMPRESS attribute when creating and altering an external table. If the external table is partitioned, all individual partitions are defined as In-Memory segments. The ability to exclude certain columns is not yet implemented. Querying an In-Memory external table requires the QUERY_REWRITE_INTEGRITY parameter in the session to be set to STALE_TOLERATED, and if the external file is updated, you must either repopulate the In-Memory segment with the DBMS_INMEMORY.REPOPULATE procedure or alter the table to NO INMEMORY and set it back to INMEMORY.
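A minimal sketch of an In-Memory external table, reusing the hypothetical names data_dir and sales.csv:

CREATE TABLE sales_ext
( region VARCHAR2(25),
  amount NUMBER )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
  LOCATION ('sales.csv') )
REJECT LIMIT UNLIMITED
INMEMORY;

-- Required in the querying session:
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = STALE_TOLERATED;

-- Population is initiated explicitly, for example:
EXECUTE DBMS_INMEMORY.POPULATE('SH', 'SALES_EXT');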

Sharding Support for JSON, LOBs, and Spatial Objects: LOB is a widely used, first-class data type in Oracle Database. Release 18c enables the use of LOBs, JSON, and spatial objects in an Oracle Sharding environment, which is useful for applications that use these data types and whose business requirements call for storage in sharded tables. JSON operators that generate temporary LOBs, large JSON documents (those that require LOB storage), spatial objects, indexes, and operators, as well as persistent LOBs, can all be used in a sharded environment, and several interfaces are new or changed as part of this feature. In a system-managed sharded database, you must specify a tablespace set for the LOBs and include it in the CREATE SHARDED TABLE statement for the parent table, as in the sketch below.
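A minimal sketch of a system-managed sharded table with a CLOB column, using the hypothetical tablespace set names ts1 and lobts1:

CREATE TABLESPACE SET ts1;
CREATE TABLESPACE SET lobts1;

CREATE SHARDED TABLE customers
( cust_id NUMBER NOT NULL,
  name    VARCHAR2(50),
  profile CLOB,
  CONSTRAINT pk_customers PRIMARY KEY (cust_id) )
TABLESPACE SET ts1
LOB (profile) STORE AS (TABLESPACE SET lobts1)
PARTITION BY CONSISTENT HASH (cust_id) PARTITIONS AUTO;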
