Friday, 9 August 2019

Find SQL Query Being Applied by Logical Standby or Applier Process

I came up with this query to find which SQL is being applied on a logical standby, as I frequently need to check which tables are undergoing heavy DML and may need to be manually re-instantiated.

The Oracle view V$LOGSTDBY contains the SERIAL# of each applier process; this can be joined with V$SESSION to find the SQL_ID of the query being applied.

Below is a raw attempt to get this information; suggestions to make it more efficient are most welcome.

set pages 1000 lines 200
col status for a80
col type for a15
col sql_text for a51
SELECT DISTINCT l.pid, l.status, l.serial#, l.type, substr(sq.sql_text,1,50) sql_text
FROM v$logstdby l, v$sql sq, v$session s
WHERE l.serial# = s.serial#
AND (s.sql_id = sq.sql_id OR s.prev_sql_id = sq.sql_id);

I would advise crosschecking the number of actual applier processes with the query below, in case you get multiple rows when both SQL_ID and PREV_SQL_ID match for a session. I used the OR condition to make the output more informative, but feel free to modify it as needed.

SELECT * FROM V$LOGSTDBY;

Cheers!
Anurag

Logical Standby Recovery Stalled - Problem due to Sequence Reset or SEQ$ table

Changes:

A sequence was reset or decremented on your primary instance, and now the apply process cannot progress on the logical standby database (or is not applying logs at all).

 

Symptoms:

ORA-16127 : stalled waiting for additional transactions to be applied
ORA-16121 : applying transaction with commit SCN 0xnnnn.nnnnnnnnn
ORA-16124 : transaction n n nnnnnnn is waiting on another transaction

Applier processes halt or slow down while working on the SQL query below:
SQL ID : 4m7m0t6fjcs5x
Text:
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1

In case you are wondering how to find this SQL, use the SERIAL# column of the V$LOGSTDBY view to get the serial# of the applier session, then obtain the SQL_ID from V$SESSION. The full query is in my earlier post above.
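That lookup can be sketched as a raw join (column names as in the standard V$ views; see the earlier post for the full version with SQL text):

```sql
-- Map applier sessions to the SQL they are running (sketch only)
SELECT s.sid, s.serial#, s.sql_id, s.prev_sql_id
  FROM v$logstdby l, v$session s
 WHERE l.serial# = s.serial#;
```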

There are probable bugs listed on My Oracle Support for this issue; I would suggest reviewing them in case they help, though in my case they did not.

Bug 9486060 - Many "update seq$" SQLs generated / spin on logical standby
Bug 6265559 - Apply spins after sequence altered at primary
Bug 9906543 - Logical standby Apply lag due to apply slaves synchronizing a sequence (patch superseding the two above)

Solution that worked:


1. Stop logical standby apply:
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
OR Force Stop / Abort logical standby apply:
ALTER DATABASE ABORT LOGICAL STANDBY APPLY;

2. Disable Guard at Session level
ALTER SESSION DISABLE GUARD;

3. Drop Sequence that was reset on primary (drop from standby only!)
DROP SEQUENCE <owner>.<sequence_name>;

4. Start Apply and Enable Guard
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
ALTER SESSION ENABLE GUARD;

This will automatically discard all changes for the sequence and apply will resume. Wait for the logical standby to resynchronize.

Now Steps to Recreate the Sequence:
1. Stop logical standby apply

2. Disable Guard at Session level

3. Get DDL from Primary
Set Long 99999
Select dbms_metadata.get_ddl('SEQUENCE','&sequence_name','&owner') From Dual;

4. Create Sequence
Use the DDL from the output above to create the sequence on the standby with the correct START WITH value.

5. Start Apply and Enable Guard
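The whole recreate procedure can be sketched as one sequence of commands (owner and sequence name are placeholders; note where each step runs):

```sql
-- Steps 1-5 (run on the STANDBY unless noted)
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER SESSION DISABLE GUARD;

-- Step 3 runs on the PRIMARY to extract the DDL:
SET LONG 99999
SELECT dbms_metadata.get_ddl('SEQUENCE','&sequence_name','&owner') FROM dual;

-- Step 4: run the extracted CREATE SEQUENCE statement on the standby,
-- adjusting START WITH to the current value from the primary.

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
ALTER SESSION ENABLE GUARD;
```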

Cheers!
Anurag

#sequence #reset  #ORA-16127 #ORA-16121 #ORA-16124 #logical #standby #seq$

Monday, 15 July 2019

Samba Share Mount Fails or SMB error NT_STATUS_LOGON_FAILURE STATUS_LOGON_FAILURE

This problem troubled me for days on a live environment with the error below (I had to mask the original details, but feel free to ask if you have any questions).

When attempting to list contents on a remote filesystem, smbclient works fine. However, mounting the same filesystem fails with an error.


# smbclient //192.168.1.11/Test -U svc_test -W test.domain
Enter TEST.DOMAIN\svc_test's password:
Try "help" to get a list of possible commands.
smb: \>
smb: \> dir
  .                                   D        0  Fri Feb  9 20:59:37 2018
  ..                                  D        0  Fri Feb  9 20:59:37 2018
  TESTDIR                             D        0  Tue Jul  9 22:55:05 2019

                222297343 blocks of size 4096. 112725217 blocks available


# mount -vt cifs -o username=svc_test,domain=testdomain,password=abc*33_320 //192.168.1.11/Test /Test 
mount: //192.168.1.11/Test is write-protected, mounting read-only
mount: cannot mount //192.168.1.11/Test read-only

While the above command fails to mount the share, the OS log (/var/log/messages) records the errors below.

CIFS VFS: Send error in SessSetup = -13
CIFS VFS: cifs_mount failed w/return code = -13
Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE


If I chose a different/higher SMB version (anything other than vers=1.0), the error message changed to:

CIFS VFS: Send error in SessSetup = -13
CIFS VFS: cifs_mount failed w/return code = -13
Status code returned 0xc000006d STATUS_LOGON_FAILURE



As the attempts failed, the account got locked automatically by MS Windows security; I would end up receiving an NT_STATUS_ACCOUNT_LOCKED_OUT error and had to reach out to the admins to unlock it from the Windows host.

If you have tried everything else related to this error --
# Share level Permissions
# Linux level Permissions
# Global Config Parameters
# SMB and NMB Process Status (should be healthy and running ;))
# Able to list share using smbclient command
# Misc troubleshooting
# Still not able to mount from script, command line etc., even after escaping special characters

Solution:
In my case the mount command was not able to parse the complete password as-is; the likely reason was that the asterisk in the password broke the option string.

The Windows admin reset the password to a simpler one (excluding the $ % * @ characters) and the mount command was able to complete.
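If resetting the password is not an option, an alternative worth trying is to keep the credentials out of the command line entirely with the cifs "credentials=" option, so neither the shell nor the option parser touches the special characters. A sketch, with placeholder paths and the masked password from above:

```shell
# Store credentials in a root-only file instead of on the command line.
# (In real use put this under /root, not /tmp.)
CRED=/tmp/.smbcred
cat > "$CRED" <<'EOF'
username=svc_test
password=abc*33_320
domain=testdomain
EOF
chmod 600 "$CRED"    # keep the password readable by root only

# Real invocation (run as root):
echo "mount -t cifs -o credentials=$CRED //192.168.1.11/Test /Test"
```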

Cheers!
Anurag


Friday, 24 May 2019

Query to Check Index Fragmentation (DBA_Indexes)

Hey everyone, the query below is useful to quickly get approximate fragmentation statistics for your database.

Although the best method is to use ANALYZE INDEX and then check the INDEX_STATS table for the exact status, the query below is still useful for an approximate result.
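For reference, the exact ANALYZE-based check looks roughly like this (index name is a placeholder; note that INDEX_STATS only holds the result of the current session's last ANALYZE):

```sql
ANALYZE INDEX scott.emp_idx VALIDATE STRUCTURE;

-- A deleted-leaf-row ratio above roughly 20% is a common rule-of-thumb
-- rebuild candidate.
SELECT name,
       del_lf_rows,
       lf_rows,
       ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 2) pct_deleted
  FROM index_stats;
```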

Variables
&owner <- Specify main schema (wildcard % for all schemas -- not recommended)
&tab <- Table name (for a specific table; use % for all tables in a schema)
&quality <- Lower is poorer, i.e. higher fragmentation (40 recommended if you are unsure)

--Enjoy--

col num_rows FORMAT   999G999G999G999
col blocksize new_value blocksize noprint

set termout off
 SELECT to_number(SUBSTR(VALUE,1,40)) blocksize
   FROM sys.v_$parameter
  WHERE name = 'db_block_size';
set termout on

SELECT
  QUALITY,
  NUM_ROWS,
  INDEX_NAME,
  TABLE_NAME,
  last_analyzed,
  ROWLEN,
  LEAVES,
  NET_MB,
  GROSS_MB
FROM
( SELECT
    SUBSTR(I.INDEX_NAME, 1, 30) INDEX_NAME,
    SUBSTR(I.TABLE_NAME, 1, 30) TABLE_NAME,
    I.NUM_ROWS                  NUM_ROWS,
    i.last_analyzed,
    SUM(TC.AVG_COL_LEN + 1) + 7 ROWLEN,
    I.LEAF_BLOCKS               LEAVES,
    ROUND((SUM(TC.AVG_COL_LEN + 1) + 7) * I.NUM_ROWS / 1000000, 0) NET_MB,
    ROUND(I.LEAF_BLOCKS * (&blocksize -100 - 23 * NVL(I.INI_TRANS, 2)) *
      (1 - NVL(I.PCT_FREE, 10) / 100) / 1000000, 0)                GROSS_MB,
    ROUND((SUM(TC.AVG_COL_LEN + 1) + 7) * I.NUM_ROWS /
      (I.LEAF_BLOCKS * (&blocksize -100 - 23 * NVL(I.INI_TRANS, 2)) *
      (1 - NVL(I.PCT_FREE, 10) / 100)) * 100, 0)                   QUALITY
  FROM DBA_INDEXES     I,
       DBA_IND_COLUMNS IC,
       DBA_TAB_COLUMNS TC
 WHERE I.INDEX_NAME   = IC.INDEX_NAME
   AND I.OWNER        = IC.INDEX_OWNER
   AND TC.TABLE_NAME  = IC.TABLE_NAME
   AND TC.OWNER       = IC.INDEX_OWNER
   AND TC.COLUMN_NAME = IC.COLUMN_NAME
   AND I.INDEX_TYPE   = 'NORMAL'
   AND I.LEAF_BLOCKS  > 1
   AND I.OWNER      like upper ( '&owner' )
   AND I.OWNER  not like upper ( 'SYS%' )
   and i.table_name like upper ( '&tab' )
   and i.num_rows   >  1000
 GROUP BY
    I.NUM_ROWS,
    i.last_analyzed,
    I.LEAF_BLOCKS,
    I.INDEX_NAME,
    I.TABLE_NAME,
    I.INI_TRANS,
    I.PCT_FREE
)
WHERE
  QUALITY < &quality
ORDER BY
  QUALITY;



Cheers!!
Anurag

Sunday, 21 April 2019

Query to Shrink Datafiles

The query below generates ALTER statements for resizing datafiles that have either overgrown or were created with a large initial size and never used.

set lines 150 pages 1000
col command for a150
SELECT ceil( blocks*(a.BlockSize)/1024/1024) "Current Size",
   ceil( (nvl(hwm,1)*(a.BlockSize))/1024/1024 ) "Smallest Poss.",
   ceil( blocks*(a.BlockSize)/1024/1024) -
   ceil( (nvl(hwm,1)*(a.BlockSize))/1024/1024 ) "Savings",
   'Alter Database Datafile '''|| file_name || ''' Resize ' ||
      ceil((nvl(hwm,1)*(a.BlockSize))/1024/1024/100)*100  || 'M;' "Command"
FROM (SELECT a.*, p.value BlockSize FROM dba_data_files a
JOIN v$parameter p ON p.Name='db_block_size') a
LEFT JOIN (SELECT file_id, max(block_id+blocks-1) hwm FROM dba_extents GROUP BY file_id ) b
ON a.file_id = b.file_id
WHERE ceil( blocks*(a.BlockSize)/1024/1024) - ceil( (nvl(hwm,1)*(a.BlockSize))/1024/1024 )
   > 1024 /* Minimum MB it must shrink by to be considered. */
ORDER BY "Savings" Desc;


Cheers,
Anurag

ORA-39097 ORA-39065 ORA-12805 During Export (expdp)

There are a couple of reasons you may face the errors below during export of a database/schema/table using Data Pump (expdp).

ORA-39097: Data Pump job encountered unexpected error -12805
ORA-39065: unexpected master process exception in DISPATCH
ORA-12805: parallel query server died unexpectedly


A common problem in RAC environments occurs when using a non-shared dump location together with parallelism. In this case simply add the cluster=n parameter to your export command, which runs the export only on the node local to the expdp command.
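As a sketch, the relevant part of an expdp parameter file would look like this (directory, dump file and schema names are placeholders):

```
# expdp system parfile=exp_schema.par
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=schema_%U.dmp
LOGFILE=schema_exp.log
SCHEMAS=SCOTT
PARALLEL=4
CLUSTER=N
```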

However, I experienced this error even with the parameters correctly set, and the job started failing all of a sudden.

The above errors were accompanied by the messages below in the alert log as well.

IPC Send timeout detected. Receiver ospid 22767 [oracle@********* (PPA6)]
Errors in file /u01/app/oracle/diag/rdbms/orcl/ORCL/trace/ORCL1_ppa6_22767.trc:

IPC Send timeout detected. Sender: ospid 43074 [oracle@********* (DM00)]
Receiver: inst 2 binc 2 ospid 99228


Upon investigation I observed the two sessions below from the SYS user. Since I was running the export as SYS, I filtered on that user:

Select a.inst_id, a.sid, a.serial#, a.username, substr(a.program,1,10) program, a.status, a.logon_time, substr(a.event,1,25) event, substr(machine,1,25) machine
from gv$session a
where a.username='SYS'
and a.sid <> (select sys_context('USERENV','SID') from dual)
order by a.logon_time;

  SID SERIAL# INST USER PROGRAM    STATUS   LOGON_TIME         SQL_TEXT                                 EVENT
----- ------- ---- ---- ---------- -------- ------------------ ---------------------------------------- -----------------
 1489   22849    1 SYS  ude@****** ACTIVE   21-APR-19 08:05:27 BEGIN          SYS.DBMS_DATAPUMP.GET_STA PL/SQL lock timer
 1489   22849    1 SYS  ude@****** ACTIVE   21-APR-19 08:05:27 SELECT operation, master_id, state, flag PL/SQL lock timer


For some reason (a probable bug or hung process), the above sessions caused expdp to fail with ORA-39065 and ORA-12805. The session was killed and the export was able to complete without errors.

SQL> Alter system kill session '1489,22849' immediate;
System altered.


At the time of writing this blog I had not seen a similar documented issue on Oracle Support; hope this helps someone.

Thanks!
Anurag

Sunday, 14 April 2019

AUD$ Purge Script (Audit Table Maintenance) - Oracle

Here I share a ready-to-use AUD$ table maintenance script. For detailed information please visit my other post on table maintenance.

Pre-requisite:

Cleanup must first be initialized using the DBMS_AUDIT_MGMT package. Below is sample code to initialize with a 40-day (960-hour) interval.

BEGIN
DBMS_AUDIT_MGMT.INIT_CLEANUP(
 AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
 DEFAULT_CLEANUP_INTERVAL => 24*40 /* hours x days */
);
END;
/


In case you see this error:
ORA-46267: Insufficient space in 'SYSAUX' tablespace, cannot complete operation

Change the default tablespace to 'USERS' or any other tablespace which has sufficient space.
UPDATE dam_config_param$ SET string_value='USERS' WHERE audit_trail_type#=1 AND param_id=22;
Commit;


Change it back after INIT_CLEANUP is successful
UPDATE dam_config_param$ SET string_value='SYSAUX' WHERE audit_trail_type#=1 AND param_id=22;
Commit;

Script:

Below script would
# Create a job using DBMS_AUDIT_MGMT.CREATE_PURGE_JOB to purge using LAST_ARCH_TS

# Modify job to start from 15th April 2019 at 01:00 AM and run every 24 hours
# Modify job to Set LAST_ARCH_TS (last archive timestamp) to purge audit data older than 40 days
# Add notification to alertlog when job starts and completes

BEGIN
DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
  AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
  AUDIT_TRAIL_PURGE_INTERVAL => 24,
  AUDIT_TRAIL_PURGE_NAME => 'JOB_PURGE_AUDIT_TRAIL',
  USE_LAST_ARCH_TIMESTAMP => TRUE
);
DBMS_SCHEDULER.SET_ATTRIBUTE (
  name      => 'JOB_PURGE_AUDIT_TRAIL',
  attribute => 'START_DATE',
  value => TO_TIMESTAMP('15-APR-2019 01:00','DD-MON-YYYY HH24:MI'));
DBMS_SCHEDULER.SET_ATTRIBUTE (
  name      => 'JOB_PURGE_AUDIT_TRAIL',
  attribute => 'JOB_ACTION',
  value => 'BEGIN
   /* AUD$ Purge Script - Anurag T. - 14/Apr/2019 */
   SYS.DBMS_SYSTEM.KSDWRT(2, ''Starting Audit Data Purge - At'');
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP
   (
   AUDIT_TRAIL_TYPE => SYS.DBMS_AUDIT_MGMT.AUDIT_TRAIL_XML,
   LAST_ARCHIVE_TIME => SYSTIMESTAMP - 40
   );
   SYS.DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(15, TRUE);
   SYS.DBMS_SYSTEM.KSDWRT(2, ''Completed Audit Data Purge - At'');
END;');

DBMS_AUDIT_MGMT.SET_PURGE_JOB_STATUS(
  audit_trail_purge_name      => 'JOB_PURGE_AUDIT_TRAIL',
  audit_trail_status_value    => DBMS_AUDIT_MGMT.PURGE_JOB_ENABLE);
END;
/


Verify:

SELECT * FROM DBA_AUDIT_MGMT_CLEANUP_JOBS;
SELECT * FROM DBA_AUDIT_MGMT_CLEAN_EVENTS;
SELECT * FROM DBA_AUDIT_MGMT_LAST_ARCH_TS;


Defragment Audit Tables (Optional)

In case your AUD$ or FGA_LOG$ table is still large after the purge, it can be defragmented from 11g onwards using the commands below, or with ALTER TABLE ... ENABLE ROW MOVEMENT and ALTER TABLE ... SHRINK SPACE in older releases.
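For the older-release route, the manual approach is roughly the following. This is a sketch only: shrink requires the segment to be in an ASSM tablespace, and directly shrinking a dictionary table such as SYS.AUD$ may not be supported on every version, so test it on a non-production system first.

```sql
-- Pre-11g style manual defragmentation (run as SYS; see caveats above)
ALTER TABLE sys.aud$ ENABLE ROW MOVEMENT;
ALTER TABLE sys.aud$ SHRINK SPACE CASCADE;
ALTER TABLE sys.aud$ DISABLE ROW MOVEMENT;
```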

The first four calls move the audit tables to another tablespace and the subsequent four move them back to the original tablespace.

BEGIN
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   audit_trail_location_value  => 'USERS');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
   audit_trail_location_value  => 'USERS');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   audit_trail_location_value  => 'USERS');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_XML,
   audit_trail_location_value  => 'USERS');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   audit_trail_location_value  => 'SYSAUX');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
   audit_trail_location_value  => 'SYSAUX');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   audit_trail_location_value  => 'SYSAUX');
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
   audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_XML,
   audit_trail_location_value  => 'SYSAUX');
END;
/


Done! Cheers.
Anurag

Friday, 22 March 2019

Sending Emails in HTML from Linux/Unix - sendmail

Hey everyone, for a long time it was quite a mystery to me how to send preformatted or HTML-generated reports from a server, even though there are a lot of good people sharing knowledge on this topic. Here I share a portion of code I developed (with their help) for this purpose.

Below is the shell script code you need, together with the "sendmail" utility, to send mails in HTML format directly from any server to a list of recipients.

FROM="<user>@<yourdomain>"
TO="<username>@<yourdomain>.com, <username2>@<yourdomain>.com"
SUBJECT="An HTML Formatted Mail Body Test"
delimiter="`date +%Y%m%d%H%M%S`"


cat <<At | /usr/sbin/sendmail -t
From: ${FROM}
To: ${TO}
Subject: ${SUBJECT}
Mime-Version: 1.0
Content-Type: multipart/alternative; boundary="${delimiter}"

--${delimiter}
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

<HTML>

`cat /tmp/your_html_file_or_message.html`
</HTML>

--${delimiter}--
At


You can echo a variable, write HTML by hand, or expand an HTML file's content between the HTML tags in the mail body, like I did.

Feel free to leave a comment if you would like an explanation of any part of the above code.

You could also use mailx to attach the HTML file with the -a switch, but having the report in the mail body saves the time of opening an attachment.

Cheers!
Anurag

Monday, 28 January 2019

Shrink of LOBs with Shrink Space Compact?

Hey everyone, been a while since I posted. A recent requirement to shrink a LOB with free space totaling 300+ GB made me wonder whether it could be done without impacting other sessions.

You can read a lot on Oracle Support and various blogs about using SHRINK SPACE COMPACT on a table, but unfortunately, at the time of writing, I had not found anyone mentioning whether it can be used for a LOB as well.

Problem?
A shrink operation running for several hours can affect user sessions during the adjustment of the HWM (high water mark). Since you have no control over the completion time of a LOB shrink, even this small window can impact your environment. Thus, for greater control, you can break the shrink operation down into two statements as below:

Alter table lob_tab modify LOB(data) (shrink space compact);
Alter table lob_tab modify LOB(data) (shrink space);


Test Case:
CREATE TABLE lob_tab (
  id NUMBER,
  data CLOB
)
LOB(data) STORE AS DATAVAL (DISABLE STORAGE IN ROW);


Populate this table by running the statements below several times, randomly or in a loop:

INSERT INTO lob_tab VALUES (1, 'TEST1');
INSERT INTO lob_tab VALUES (3, 'TEST3');
INSERT INTO lob_tab VALUES (2, 'TEST2');

INSERT INTO lob_tab select * from lob_tab where id=2;
INSERT INTO lob_tab select * from lob_tab where id=1;
INSERT INTO lob_tab select * from lob_tab where id=3;


Commit;

Check Size of Table:

col segment_name for a25
col segment_type for a20
col mb for 9999.99
select segment_name, segment_type, sum(bytes/1024/1024) MB
from dba_segments
where owner='ORA10G'
group by segment_name, segment_type
order by 1,2
/

SEGMENT_NAME              SEGMENT_TYPE                 MB
------------------------- -------------------- ----------
DATAVAL                   LOBSEGMENT                  400
LOB_TAB                   TABLE                         2
SYS_IL0000046208C00002$$  LOBINDEX                      3


Delete some data and check fragmentation in megabytes using the DBMS_SPACE.SPACE_USAGE procedure:

set serveroutput on
declare
v_unformatted_blocks number;
v_unformatted_bytes number;
v_fs1_blocks number;
v_fs1_bytes number;
v_fs2_blocks number;
v_fs2_bytes number;
v_fs3_blocks number;
v_fs3_bytes number;
v_fs4_blocks number;
v_fs4_bytes number;
v_full_blocks number;
v_full_bytes number;
total_fs_bytes number;
v_obj_name varchar2(40);
v_obj_type varchar2(10);
Cursor obj IS SELECT segment_name, segment_type FROM dba_segments
                WHERE segment_name in ('DATAVAL')
                ORDER BY segment_type;
begin
    OPEN obj;
    LOOP
    FETCH obj INTO v_obj_name, v_obj_type;
    EXIT WHEN obj%NOTFOUND;
    IF v_obj_type = 'LOBSEGMENT'
    THEN
        v_obj_type := 'LOB';
    END IF;
    dbms_space.space_usage ('ORA10G', v_obj_name , v_obj_type, v_unformatted_blocks, v_unformatted_bytes, v_fs1_blocks
    , v_fs1_bytes, v_fs2_blocks, v_fs2_bytes,v_fs3_blocks, v_fs3_bytes, v_fs4_blocks, v_fs4_bytes, v_full_blocks, v_full_bytes);
    total_fs_bytes:=round((v_fs1_bytes + v_fs2_bytes + v_fs3_bytes + v_fs4_bytes + v_unformatted_bytes)/1048576,2);
    dbms_output.put_line('Object-> ' || v_obj_name || ', Type-> ' || v_obj_type || ', Free MB-> ' || total_fs_bytes);
    END LOOP;
end;
/


Object-> DATAVAL, Type-> LOB, Free MB-> 107.57
 


Run Shrink
Alter table lob_tab modify LOB(data) (shrink space compact);

Check again using space usage script:
Object-> DATAVAL, Type-> LOB, Free MB-> 107.57

Free space didn't change? This is because SHRINK SPACE COMPACT has only reorganized the data blocks, not adjusted the HWM. Now run SHRINK SPACE to do that:

Alter table lob_tab modify LOB(data) (shrink space);

Check again using space usage script:
Object-> DATAVAL, Type-> LOB, Free MB-> 2.11


Enjoy!