Thursday, 20 March 2025

ORA-28414: specified keys are in HSM and OPEN_NO_MASTER_KEY when migrating to pluggable database (PDB)

As explained by Oracle, this error occurs when your key is stored in an external key store, such as a Hardware Security Module (HSM) or Oracle Key Vault (OKV).

This post, however, is not about HSM at all; the error was received while handling a TDE key stored in a software keystore (an Oracle wallet on a filesystem).

I got this error while migrating a non-CDB into a PDB on an existing container database.

The original TDE keys on the system look like below [I have masked the keys for obvious reasons ;)]

set lines 250
col key_id for a55
col creation_time for a35
col activation_time for a35
col creator_pdbname for a15
col creator_dbname for a15
col ACTIVATING_PDBNAME for a15
col ACTIVATING_DBNAME  for a15
select key_id, creator_dbname, ACTIVATING_DBNAME, creation_time, key_use, keystore_type
from v$encryption_keys order by creation_time;
 

KEY_ID                                                CREATOR_DBNAME  CREATION_TIME                     KEYSTORE_TYPE
----------------------------------------------------- --------------- --------------------------------- -----------------
XXXXXXXXXXXXXXXXXXXXXXXwAAAAAAAAAAAAAAAAAAAAAAAAAAAAA  PONPREM1_VUL    26-SEP-16 12.21.18.601380 +01:00  SOFTWARE KEYSTORE
YYYYYYYYYYYYYYYYYYYYYYYuAAAAAAAAAAAAAAAAAAAAAAAAAAAAA  PONPREM1_VUL    15-JUN-17 22.07.35.397077 +01:00  HSM

 

The key that is set as primary here was created in the HSM; when the wallet was migrated to a SOFTWARE KEYSTORE, that key ended up as secondary.
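If you want to double-check which key is actually active (as opposed to merely present in the wallet), the activation columns of the same view give a quick hint. This is just the earlier query trimmed down:

col activation_time for a35
select key_id, keystore_type, activation_time, activating_dbname
from v$encryption_keys
order by activation_time;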

I am skipping the actual plug-in steps here; by this point the PDB had been plugged in and the TDE keys had already been imported once.

The first time this non-CDB was plugged into the CDB, it could not set the TDE master key for the PDB. The wallet remained in OPEN_NO_MASTER_KEY status, hence the error and hence this blog post for you.

ALTER SESSION SET CONTAINER=CDB$ROOT;
set linesize 180
column wrl_parameter format a40
column wrl_type heading 'Type' format a10
column status heading 'Status' format a20
column fully_backed_up heading 'Backed Up' format a15
column pdb_name heading 'PDB Name' format a15
select b.name pdb_name,wrl_type,wrl_parameter,status,wallet_type,keystore_mode,fully_backed_up from v$encryption_wallet a,v$containers b where a.con_id = b.con_id(+); 

PDB Name         Type         WRL_PARAMETER             Status             
--------------- ---------- ---------------------------  ----------     
CDB$ROOT         ASM         +DATAC1/PCDB_ATN/tde/      OPEN
PDB$SEED         ASM                                    OPEN
PONPREM1         ASM                                    OPEN_NO_MASTER_KEY 

Any attempt to USE or SET the key fails with ORA-28414: specified keys are in HSM

ADMINISTER KEY MANAGEMENT USE KEY '<key_id_for_software_keystore>' identified by "walletPass" WITH BACKUP USING 'USEKEY';

OR

ADMINISTER KEY MANAGEMENT SET KEY identified by "walletPass" WITH BACKUP USING 'SETKEY';
 

Resolution:

After troubleshooting, I was able to resolve the problem by following the below steps:

The CDB creates services for a PDB, so if you have already attempted the plug-in operation, you will have to delete those services; otherwise you cannot plug in the database again with the same PDB name.

# Delete any PDB services from OCR, if added explicitly
srvctl remove service -d PCDB_ATN -s PONPREM1_P

# Delete existing services from PDB to prepare PDB for unplug/plug-in operation
ALTER SESSION SET CONTAINER=PONPREM1;
select 'exec dbms_service.delete_service(''' || name || ''');' src from dba_services;
exec dbms_service.delete_service('PONPREM1_P');
exec dbms_service.delete_service('PONPREM1_RO');

-- Internal services cannot be stopped/deleted, so no worries about these
exec dbms_service.stop_service('PONPREM1');
exec dbms_service.delete_service('PONPREM1');


# Export TDE keys from PDB
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY walletPass;
ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "walletPass" TO '/tmp/PONPREM1_pdb_enckeys.exp' IDENTIFIED BY walletPass;


# Unplug PDB and drop PDB while keeping the datafiles
ALTER SESSION SET CONTAINER=CDB$ROOT;
alter pluggable database PONPREM1 close instances=all;
alter pluggable database PONPREM1 unplug into '/tmp/PONPREM1_pdb.xml';
drop pluggable database PONPREM1 keep datafiles;

# Plug the PDB back to CDB
create pluggable database PONPREM1 using '/tmp/PONPREM1_pdb.xml' NOCOPY TEMPFILE REUSE;
alter pluggable database PONPREM1 open read write;
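Optionally, before importing the keys, it is worth checking that the plug-in itself went through cleanly. PDB_PLUG_IN_VIOLATIONS lists anything Oracle flagged during the operation (the filter below is just how I would run it, adjust as needed):

select name, cause, type, message, status
from pdb_plug_in_violations
where name = 'PONPREM1' and status <> 'RESOLVED';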


# Perform import of TDE keys again
ALTER SESSION SET CONTAINER=PONPREM1;
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY walletPass;
ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "walletPass" FROM '/tmp/PONPREM1_pdb_enckeys.exp' IDENTIFIED BY "walletPass" WITH BACKUP USING 'PRE_PONPREM1';
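To confirm the keys actually landed in the PDB (note that the secret passed to IMPORT must match the one used during EXPORT), the same v$encryption_keys query from the beginning of this post can be re-run from within the PDB:

select key_id, creator_dbname, creation_time, keystore_type
from v$encryption_keys order by creation_time;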


# Bounce the PDB and save state
ALTER PLUGGABLE DATABASE PONPREM1 CLOSE instances=all;
ALTER PLUGGABLE DATABASE PONPREM1 OPEN instances=all;
col name for a20
select name, inst_id, restricted, open_mode from gv$pdbs order by 1,2;
ALTER PLUGGABLE DATABASE PONPREM1 SAVE STATE instances=all;


# Add PDB services again to the CDB
srvctl add service -d PCDB_ATN -s PONPREM1_P -pdb PONPREM1 -r PCDB1,PCDB2 -tafpolicy BASIC -failovertype SELECT -l "PRIMARY"

The key is now set automatically, and the wallet status is OPEN

ALTER SESSION SET CONTAINER=CDB$ROOT;
set linesize 180
column wrl_parameter format a40
column wrl_type heading 'Type' format a10
column status heading 'Status' format a20
column fully_backed_up heading 'Backed Up' format a15
column pdb_name heading 'PDB Name' format a15
select b.name pdb_name,wrl_type,wrl_parameter,status,wallet_type,keystore_mode,fully_backed_up from v$encryption_wallet a,v$containers b where a.con_id = b.con_id(+); 

PDB Name         Type         WRL_PARAMETER             Status             
--------------- ---------- ---------------------------  ----------     
CDB$ROOT         ASM         +DATAC1/PCDB_ATN/tde/      OPEN
PDB$SEED         ASM                                    OPEN
PONPREM1         ASM                                    OPEN

Enjoy.

Friday, 14 July 2023

Online Reorganization in Oracle 19c and 12cR2

We recently had to reorganize 7 bigfile tablespaces/datafiles, each with roughly 8-10 TB of free space and less than 1 TB actually used.

The datafiles contained weekly/daily partitions from a few tables, so housekeeping left a lot of fragmented space behind.

We utilized the online move feature of 19c for all the object types involved - TABLE, TABLE PARTITION, LOB, LOB PARTITION and INDEX.

To answer "why we didn't use dbms_redefinition?" - the segments to reorganize for us were smaller and using alter ... move online update indexes was much quicker for us. The partitions of indexes/tables/lobs were scattered across multiple tablespaces and they belonged to single parent table(s). To reorganize/defragment individual datafiles, this approach worked efficiently for us.

Here are the steps.

Find the HWM of the datafile. Let's assume the datafile belongs to tablespace APP_DATA with file_id=8:
Select max(block_id + blocks)*8192/1024/1024/1024 hwm_gb from dba_extents where FILE_ID = 8;

Find the types of segments/objects in this datafile that sit above the 1 TB mark (1 TB is the value I used; change it depending on how much of the datafile you want to retain):

Select file_id, segment_type, count(distinct segment_name ||'.'||partition_name) as count
FROM DBA_EXTENTS
WHERE ((block_id + blocks-1)*8192)/1024/1024/1024/1024 > 1
AND FILE_ID = 8
group by file_id,segment_type
ORDER BY 1
/

Example:
   FILE_ID SEGMENT_TYPE            COUNT
---------- ------------------ ----------
         8 TABLE PARTITION           306
         8 LOB PARTITION               6
         8 LOBSEGMENT                  3
         8 TABLE                       2
         8 INDEX                       1


Now that you know all the object types, prepare SQL statements as shown below. Feel free to adjust the parallelism and other settings based on your environment; we were on an Oracle Engineered System, so hardware was not a constraint.

-- TABLE
set pages 0 lines 300 long 999 echo off feed off head off
spool 01_tables.sql
select 'set time on timing on echo on' from dual;
set serveroutput on
begin
 for i in (select distinct owner as town, segment_name as segname
            FROM DBA_EXTENTS
            WHERE ((block_id + blocks-1)*8192)/1024/1024/1024/1024 > 1
            AND FILE_ID = 8
            AND SEGMENT_TYPE LIKE 'TABLE'
            ORDER BY 1)
 loop
  dbms_output.put_line('alter table ' || i.town || '.' || i.segname || ' move tablespace SCRATCH update indexes parallel 64;');
 
  for j in (select owner as idxown, index_name as indx from dba_indexes where owner=i.town and table_name=i.segname)
  loop
   dbms_output.put_line('alter index ' || j.idxown || '.' || j.indx || ' rebuild tablespace scratch online parallel 64;');
   dbms_output.put_line('alter index ' || j.idxown || '.' || j.indx || ' noparallel;');
  end loop;
 end loop;
end;
/
select 'select ''Completed> '' || systimestamp from dual;' from dual;
spool off;


-- TABLE PARTITION
set pages 0 lines 300 long 999 echo off feed off head off
spool 02_tab_parts.sql
select 'set time on timing on echo on' from dual;
select 'select ''Started> '' || systimestamp from dual;' from dual;
select distinct 'alter table ' || owner || '.' || segment_name
    || ' move partition ' || partition_name
    || ' tablespace SCRATCH update indexes online parallel 64;' src
FROM DBA_EXTENTS
WHERE ((block_id + blocks-1)*8192)/1024/1024/1024/1024 > 1
AND FILE_ID = 8
AND SEGMENT_TYPE LIKE '%TABLE%PARTITION%'
ORDER BY 1;
select 'select ''Completed> '' || systimestamp from dual;' from dual;
spool off;


-- LOB PARTITION
set pages 0 lines 300 long 999 echo off feed off head off
spool 03_lob_parts.sql
select 'set time on timing on echo on' from dual;
select distinct 'alter table ' || e.owner || '.' || l.table_name
    || ' move partition ' || l.partition_name
    || ' lob (' || l.COLUMN_NAME || ') store as'
    || case when l.securefile='YES' then ' securefile (tablespace SCRATCH) parallel 64 online update indexes;'
        else ' basicfile (tablespace SCRATCH) parallel 64 online update indexes;' end as src
FROM DBA_EXTENTS e, dba_lob_partitions l
WHERE ((e.block_id + e.blocks-1)*8192)/1024/1024/1024/1024 > 1
AND e.owner=l.table_owner AND e.segment_name=l.lob_name AND e.partition_name=l.LOB_PARTITION_NAME
AND e.FILE_ID = 8
AND e.SEGMENT_TYPE LIKE '%LOB%PARTITION%'
ORDER BY 1;
spool off;

-- LOBSEGMENT
set pages 0 lines 300 long 999 echo off feed off head off
spool 04_lobseg.sql
select 'set time on timing on echo on' from dual;
select distinct 'alter table ' || e.owner || '.' || l.table_name
    || ' move lob (' || l.COLUMN_NAME || ') store as'
    || case when l.securefile='YES' then ' securefile (tablespace SCRATCH) parallel 64 online update indexes;'
        else ' basicfile (tablespace SCRATCH) parallel 64 online update indexes;' end as src
FROM DBA_EXTENTS e, dba_lobs l
WHERE ((e.block_id + e.blocks-1)*8192)/1024/1024/1024/1024 > 1
AND e.owner=l.owner AND e.segment_name=l.SEGMENT_NAME
AND e.FILE_ID = 8
AND e.SEGMENT_TYPE LIKE 'LOBSEGMENT'
ORDER BY 1;
spool off;

-- INDEX
set pages 0 lines 300 long 999 echo off feed off head off
spool 05_indexes.sql
select 'set time on timing on echo on' from dual;
select 'select ''Started> '' || systimestamp from dual;' from dual;
select distinct 'alter index ' || owner || '.' || segment_name
    || ' rebuild tablespace scratch online parallel 64;' src
FROM DBA_EXTENTS
WHERE ((block_id + blocks-1)*8192)/1024/1024/1024/1024 > 1
AND FILE_ID = 8
AND SEGMENT_TYPE LIKE 'INDEX'
ORDER BY 1;
select 'select ''Completed> '' || systimestamp from dual;' from dual;
spool off;

Once you have the statements, create the SCRATCH tablespace and move all the objects above the 1 TB mark into it.

Create bigfile tablespace SCRATCH datafile size 100m autoextend on maxsize 2000G;

Run the SQL statements prepared in the previous steps. I would suggest running them with nohup, e.g.:
nohup sqlplus / as sysdba @/<file_location>/file_name.sql &

Upon completion of the scripts, check the HWM of the datafile, resize it as small as possible, and limit its autoextend capacity:

Select max(block_id + blocks)*8192/1024/1024/1024 hwm_gb from dba_extents where FILE_ID = 8;
alter database datafile 8 resize 1000G;
alter database datafile 8 autoextend on maxsize 1200G;

Now that the datafile is shrunk, we can move all the objects back into it.

To prepare the move-back statements, let's reuse the same scripts we prepared in the earlier steps, e.g.:

sed 's/SCRATCH/APP_DATA/g' 01_tables.sql > rb_01_tables.sql 

The sed command replaces the staging tablespace name (SCRATCH) with the application tablespace name; the resulting script can again be run with nohup.

Once all the scripts have been run to move the data back into the original tablespace, use the query below to verify that no segments remain in the SCRATCH tablespace (file_id=20 in this case). If any remain, generate the appropriate commands using the steps given earlier and move them back to the original tablespace.

select file_id, segment_type, count(distinct segment_name ||'.'||partition_name) as count
FROM DBA_EXTENTS
WHERE FILE_ID = 20
group by file_id,segment_type
ORDER BY 1
/

No rows selected.

Done.

Enjoy!!!

Friday, 4 June 2021

Get all the Privileges and Roles Granted to an Oracle User

It is often required to review all the privileges/roles assigned to an Oracle user. The piece of SQL below can be run via SQL Developer or SQL*Plus to get this information; just replace the username and you are done.

define usern = 'USERNAME'

Select 'SYS Privs> ' || privilege as privilege from dba_sys_privs where grantee='&&usern'
union
Select 'OBJ Privs> ' || privilege || ' on ' || owner || '.' || table_name from dba_tab_privs where grantee='&&usern' --and grantor<>'SYS'
union
Select 'Roles> ' || granted_role from dba_role_privs where grantee='&&usern'
union
Select 'COL Privs> ' || privilege from dba_col_privs where grantee='&&usern'
Order by 1;
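If you also want to see the system privileges the user inherits through its roles (including nested roles), something along these lines can be appended; treat it as a starting point and adjust as needed:

-- System privileges inherited via directly and indirectly granted roles (reuses the same &&usern define)
Select distinct 'SYS Privs via Role> ' || privilege as privilege
from dba_sys_privs
where grantee in (select granted_role
                  from dba_role_privs
                  start with grantee = '&&usern'
                  connect by prior granted_role = grantee)
Order by 1;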

Cheers!
Anurag

Monday, 7 September 2020

Enable or Disable Cron jobs Using Single Command

A quick one: DBAs frequently need to disable/enable cron jobs. This can often be a time killer, so here are three commands to make life easier.

These only affect the crontab of the currently logged-in user:

#-- Take a fresh backup of current cronjobs for reference
crontab -l > /tmp/cron.bkp

#-- Disable Cronjobs
crontab -l | sed 's/^/#/g' > /tmp/cron.edit; crontab /tmp/cron.edit; crontab -l

#-- Enable Jobs
crontab -l | sed 's/^#//g' > /tmp/cron.edit; crontab /tmp/cron.edit; crontab -l

Cheers!
Anurag

Friday, 9 August 2019

Find SQL Query Being Applied by Logical Standby or Applier Process

I came up with this query to find which SQL is being applied on a logical standby, as I frequently need to check which tables are undergoing heavy DML and may need to be manually re-instantiated.

The Oracle view V$LOGSTDBY contains the SERIAL# of each applier process; this can be joined with V$SESSION to find the SQL_ID of the SQL query being applied.

Below is a raw attempt to get this information; suggestions to make it more efficient are most welcome.

Set pages 1000 lines 200
Col status for a80
Col type for a15
Col sql_text for a51
SELECT distinct l.Pid, l.Status, l.Serial#, l.Type, substr(sq.Sql_text,1,50) sql_text
FROM V$LOGSTDBY l, V$SQL sq, V$SESSION s
Where l.serial#=s.serial#
and (s.sql_id=sq.sql_id or s.prev_sql_id=sq.sql_id);

I would advise cross-checking the number of actual applier processes using the query below in case you get multiple rows when both sql_id and prev_sql_id match for a session. I used the OR condition to make the output more informative, but feel free to modify it as needed.

SELECT * FROM V$LOGSTDBY;

Cheers!
Anurag

Logical Standby Recovery Stalled - Problem due to Sequence Reset or SEQ$ table

Changes:

A sequence was reset or decremented on the primary instance, and now the apply process cannot make progress on the logical standby database (or is not applying logs at all).

 

Symptoms:

ORA-16127 : stalled waiting for additional transactions to be applied
ORA-16121 : applying transaction with commit SCN 0xnnnn.nnnnnnnnn
ORA-16124 : transaction n n nnnnnnn is waiting on another transaction

Applier processes are halted/slowed down while working on the below SQL query:
SQL ID : 4m7m0t6fjcs5x
Text:
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1

In case you are wondering how to get the SQL query: use the SERIAL# column of the V$LOGSTDBY view to get the serial# of the applier session, then obtain the SQL_ID from V$SESSION using it. Get Full Query Here
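For completeness, the join itself is small enough to repeat here (it is essentially the query from the linked post):

SELECT distinct l.Serial#, s.sql_id, s.prev_sql_id
FROM V$LOGSTDBY l, V$SESSION s
WHERE l.serial# = s.serial#;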

There are probable bugs listed on My Oracle Support for this issue; I would suggest reviewing them in case they help you, though in my case they did not.

Bug 9486060 - Many "update seq$" SQLs generated / spin on logical standby
Bug 6265559 - Apply spins after sequence altered at primary
Bug 9906543 - Logical standby Apply lag due to apply slaves synchronizing a sequence (its patch supersedes the above two)

Solution that worked:


1. Stop logical standby apply:
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
OR Force Stop / Abort logical standby apply:
ALTER DATABASE ABORT LOGICAL STANDBY APPLY;

2. Disable Guard at Session level
ALTER SESSION DISABLE GUARD;

3. Drop Sequence that was reset on primary (drop from standby only!)
DROP SEQUENCE <owner>.<sequence_name>;

4. Start Apply and Enable Guard
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
ALTER SESSION ENABLE GUARD;

This will automatically discard all the changes for the sequence, and the apply will resume. Wait for the logical standby to be resynchronized.
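One way to keep an eye on progress while waiting is the standard progress view; when APPLIED_SCN catches up with LATEST_SCN the standby is back in sync:

select applied_scn, applied_time, latest_scn, latest_time from v$logstdby_progress;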

Now Steps to Recreate the Sequence:
1. Stop logical standby apply

2. Disable Guard at Session level

3. Get DDL from Primary
Set Long 99999
Select dbms_metadata.get_ddl('SEQUENCE','&sequence_name','&owner') From Dual;

4. Create Sequence
Use the DDL from the output of the above statement to create the sequence on the standby with the correct START WITH value, as sketched below.
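A purely illustrative shape of the resulting statement (the actual attributes and START WITH value must come from the get_ddl output and the current value on the primary):

-- Illustrative only; replace the attributes with the ones from the DDL obtained above
CREATE SEQUENCE <owner>.<sequence_name>
  START WITH <current_value_on_primary>
  INCREMENT BY 1 CACHE 20 NOCYCLE;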

5. Start Apply and Enable Guard

Cheers!
Anurag

#sequence #reset  #ORA-16127 #ORA-16121 #ORA-16124 #logical #standby #seq$

Monday, 15 July 2019

Samba Share Mount Fails or SMB error NT_STATUS_LOGON_FAILURE STATUS_LOGON_FAILURE

This problem had been troubling me for days on a live environment, with the error below (I had to mask the original details, but feel free to ask if you have any questions).

When attempting to list the contents of the remote filesystem, smbclient works fine. However, mounting the same filesystem fails with an error.


# smbclient //192.168.1.11/Test -U svc_test -W test.domain
Enter TEST.DOMAIN\svc_test's password:
Try "help" to get a list of possible commands.
smb: \>
smb: \> dir
  .                                   D        0  Fri Feb  9 20:59:37 2018
  ..                                  D        0  Fri Feb  9 20:59:37 2018
  TESTDIR                             D        0  Tue Jul  9 22:55:05 2019

                222297343 blocks of size 4096. 112725217 blocks available


# mount -vt cifs -o username=svc_test,domain=testdomain,password=abc*33_320 //192.168.1.11/Test /Test 
mount: //192.168.1.11/Test is write-protected, mounting read-only
mount: cannot mount //192.168.1.11/Test read-only

While the above command fails to mount the share, the OS log (/var/log/messages) records the errors below.

CIFS VFS: Send error in SessSetup = -13
CIFS VFS: cifs_mount failed w/return code = -13
Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE


If I chose a different/higher version (anything other than option vers=1.0), the error message would change to:

CIFS VFS: Send error in SessSetup = -13
CIFS VFS: cifs_mount failed w/return code = -13
Status code returned 0xc000006d STATUS_LOGON_FAILURE



As the attempts kept failing, the account got locked automatically by MS Windows security policy; I would end up receiving the NT_STATUS_ACCOUNT_LOCKED_OUT error and had to reach out to the admins to unlock it from the Windows host.

If you have tried everything else related to this error:
# Share level Permissions
# Linux level Permissions
# Global Config Parameters
# SMB and NMB Process Status (should be healthy and running ;))
# Able to list share using smbclient command
# Misc troubleshooting
# Still not able to mount from a script or the command line, even after escaping special characters

Solution:
The mount command in my case could not parse the complete password as-is; the probable reason was that the asterisk in the password broke the command in between.

The Windows admin reset the password to a simpler one (excluding any of the $ % * @ characters), and the mount command was then able to complete.

Cheers!
Anurag