Sunday, 21 April 2019

Query to Shrink Datafiles

The query below generates ALTER DATABASE ... RESIZE statements for datafiles that have either overgrown or were created with a large initial size that was never used. A datafile cannot be shrunk below its high-water mark (the highest allocated extent), so the generated command rounds the high-water mark up to the next 100 MB. Only files that would save more than 1 GB are listed.

set lines 150 pages 1000
col command for a150
SELECT ceil( blocks*(a.BlockSize)/1024/1024 ) "Current Size (MB)",
   ceil( (nvl(hwm,1)*(a.BlockSize))/1024/1024 ) "Smallest Poss. (MB)",
   ceil( blocks*(a.BlockSize)/1024/1024 ) -
   ceil( (nvl(hwm,1)*(a.BlockSize))/1024/1024 ) "Savings (MB)",
   'Alter Database Datafile '''|| file_name || ''' Resize ' ||
      ceil((nvl(hwm,1)*(a.BlockSize))/1024/1024/100)*100  || 'M;' "Command"
FROM (SELECT a.*, p.value BlockSize FROM dba_data_files a
JOIN v$parameter p ON p.name = 'db_block_size') a
LEFT JOIN (SELECT file_id, max(block_id+blocks-1) hwm FROM dba_extents GROUP BY file_id) b
ON a.file_id = b.file_id
WHERE ceil( blocks*(a.BlockSize)/1024/1024 ) - ceil( (nvl(hwm,1)*(a.BlockSize))/1024/1024 )
   > 1024 /* Minimum MB it must shrink by to be considered. */
ORDER BY "Savings (MB)" Desc;


Cheers,
Anurag

ORA-39097 ORA-39065 ORA-12805 During Export (expdp)

There are a couple of reasons you may face the errors below during an export of a database, schema, or tables using Data Pump (expdp).

ORA-39097: Data Pump job encountered unexpected error -12805
ORA-39065: unexpected master process exception in DISPATCH
ORA-12805: parallel query server died unexpectedly


A common problem in RAC environments occurs when the dump directory is on non-shared storage and parallelism is used. In that case, simply add the CLUSTER=N parameter to your export command so the Data Pump job runs only on the node where expdp was started.
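For example, a parallel schema export restricted to the local node might look like this (the schema, directory, and file names here are only illustrative):

```
expdp system schemas=HR directory=DATA_PUMP_DIR dumpfile=hr_%U.dmp \
      logfile=hr_exp.log parallel=4 cluster=N
```

The %U substitution variable generates one dump file per parallel worker; without CLUSTER=N, workers may start on other nodes that cannot reach a non-shared directory.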

However, I experienced these errors even though the parameters were set correctly, and the job started failing all of a sudden.

The errors above were accompanied by the following messages in the alert log:

IPC Send timeout detected. Receiver ospid 22767 [oracle@********* (PPA6)]
Errors in file /u01/app/oracle/diag/rdbms/orcl/ORCL/trace/ORCL1_ppa6_22767.trc:

IPC Send timeout detected. Sender: ospid 43074 [oracle@********* (DM00)]
Receiver: inst 2 binc 2 ospid 99228


Upon investigation I observed the two sessions below belonging to the SYS user. Since I was running the export as SYS, I filtered on that user and excluded my own session:

Select a.inst_id, a.sid, a.serial#, a.username, substr(a.program,1,10) program, a.status,
   a.logon_time, substr(s.sql_text,1,40) sql_text, substr(a.event,1,25) event, substr(a.machine,1,25) machine
from gv$session a
left join gv$sqlarea s on s.inst_id = a.inst_id and s.sql_id = a.sql_id
where a.username = 'SYS'
and a.sid <> sys_context('USERENV','SID')
order by a.logon_time;

  SID SERIAL# INST USER PROGRAM    STATUS   LOGON_TIME         SQL_TEXT                                 EVENT
----- ------- ---- ---- ---------- -------- ------------------ ---------------------------------------- -----------------
 1489   22849    1 SYS  ude@****** ACTIVE   21-APR-19 08:05:27 BEGIN          SYS.DBMS_DATAPUMP.GET_STA PL/SQL lock timer
 1489   22849    1 SYS  ude@****** ACTIVE   21-APR-19 08:05:27 SELECT operation, master_id, state, flag PL/SQL lock timer


For some reason (probably a bug or a hung process), the sessions above caused expdp to fail with ORA-39065 and ORA-12805. After the session was killed, the export completed without errors.

SQL> Alter system kill session '1489,22849' immediate;
System altered.


At the time I wrote this post I had not seen a similar issue documented on Oracle Support; I hope this helps someone.

Thanks!
Anurag

Sunday, 14 April 2019

AUD$ Purge Script (Audit Table Maintenance) - Oracle

Here I share a ready-to-use AUD$ table maintenance script. For background, please see my other post on table maintenance.

Pre-requisite:

Clean-up must first be initialized using the DBMS_AUDIT_MGMT package. Below is sample code that initializes it with a default cleanup interval of 40 days (960 hours).

BEGIN
DBMS_AUDIT_MGMT.INIT_CLEANUP(
 AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
 DEFAULT_CLEANUP_INTERVAL => 24*40 /* hours x days */
);
END;
/


In case you see this error:
ORA-46267: Insufficient space in 'SYSAUX' tablespace, cannot complete operation

Change the default tablespace to 'USERS' or any other tablespace which has sufficient space.
UPDATE dam_config_param$ SET string_value='USERS' WHERE audit_trail_type#=1 AND param_id=22;
Commit;


Change it back after INIT_CLEANUP is successful
UPDATE dam_config_param$ SET string_value='SYSAUX' WHERE audit_trail_type#=1 AND param_id=22;
Commit;

Script:

The script below will:
# Create a job using DBMS_AUDIT_MGMT.CREATE_PURGE_JOB that purges using LAST_ARCH_TS
# Set the job to start on 15th April 2019 at 01:00 AM and run every 24 hours
# Set LAST_ARCH_TS (last archive timestamp) inside the job action so that audit data older than 40 days is purged
# Write a message to the alert log when the job starts and completes

BEGIN
DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
  AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
  AUDIT_TRAIL_PURGE_INTERVAL => 24,
  AUDIT_TRAIL_PURGE_NAME => 'JOB_PURGE_AUDIT_TRAIL',
  USE_LAST_ARCH_TIMESTAMP => TRUE
);
DBMS_SCHEDULER.SET_ATTRIBUTE (
  name      => 'JOB_PURGE_AUDIT_TRAIL',
  attribute => 'START_DATE',
  value => TO_TIMESTAMP('15-APR-2019 01:00','DD-MON-YYYY HH24:MI'));
DBMS_SCHEDULER.SET_ATTRIBUTE (
  name      => 'JOB_PURGE_AUDIT_TRAIL',
  attribute => 'JOB_ACTION',
  value => 'BEGIN
   /* AUD$ Purge Script - Anurag T. - 14/Apr/2019 */
   SYS.DBMS_SYSTEM.KSDWRT(2, ''Starting Audit Data Purge - At'');
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_XML,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL, TRUE);
   SYS.DBMS_SYSTEM.KSDWRT(2, ''Completed Audit Data Purge - At'');
END;');

DBMS_AUDIT_MGMT.SET_PURGE_JOB_STATUS(
  audit_trail_purge_name      => 'JOB_PURGE_AUDIT_TRAIL',
  audit_trail_status_value    => DBMS_AUDIT_MGMT.PURGE_JOB_ENABLE);
END;
/


Verify:

SELECT * FROM DBA_AUDIT_MGMT_CLEANUP_JOBS;
SELECT * FROM DBA_AUDIT_MGMT_CLEAN_EVENTS;
SELECT * FROM DBA_AUDIT_MGMT_LAST_ARCH_TS;


Defragment Audit Tables (Optional)

If your AUD$ or FGA_LOG$ table is still large after the purge, it can be defragmented in 11g onwards using the commands below (in older releases, use ALTER TABLE ... ENABLE ROW MOVEMENT followed by ALTER TABLE ... SHRINK SPACE).

The first set of calls moves the audit tables to another tablespace, and the second set moves them back to the original tablespace; each move rebuilds the table and releases the fragmented space. Note that SET_AUDIT_TRAIL_LOCATION applies only to the database audit trails (AUD$ and FGA_LOG$); the OS and XML trails are operating-system files and have no tablespace to relocate.

BEGIN
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   audit_trail_location_value  => 'USERS');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
   audit_trail_location_value  => 'USERS');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   audit_trail_location_value  => 'SYSAUX');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
   audit_trail_location_value  => 'SYSAUX');
END;
/
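For releases before 11g, where DBMS_AUDIT_MGMT is not available, the row-movement/shrink approach mentioned above might look like the sketch below. Be aware that SHRINK SPACE requires the table to reside in an ASSM (automatic segment space management) tablespace; AUD$ lives in the SYSTEM tablespace by default, which is not ASSM, so you may need to move the table to an ASSM tablespace first. Treat this as a sketch and test on a non-production database before use.

```sql
-- Sketch only: shrink the standard and fine-grained audit tables in place.
ALTER TABLE SYS.AUD$ ENABLE ROW MOVEMENT;
ALTER TABLE SYS.AUD$ SHRINK SPACE;
ALTER TABLE SYS.AUD$ DISABLE ROW MOVEMENT;

ALTER TABLE SYS.FGA_LOG$ ENABLE ROW MOVEMENT;
ALTER TABLE SYS.FGA_LOG$ SHRINK SPACE;
ALTER TABLE SYS.FGA_LOG$ DISABLE ROW MOVEMENT;
```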


Done! Cheers.
Anurag