Oracle 12c New Feature – Heat Map and Automatic Data Optimization

  • Posted by Gavin Soorma
  • On September 10, 2013

One of the main goals of an ILM (Information Lifecycle Management) policy is to reduce the cost of storage, improve performance and access times for both current and archived data, and retain data long enough to satisfy regulatory requirements related to data preservation.

As data grows, rather than buying and installing additional storage as is normally the case, organizations can adopt both storage tiering and compression tiering models to achieve their ILM goals and policies.

Storage tiering involves storing older, dormant data on cheaper, low-cost storage while retaining current or highly volatile data on high-performance storage.

Using compression tiering, current or OLTP data can be compressed at lower compression levels, while data that is rarely accessed and rarely modified can be subjected to higher levels of compression and also moved to low-cost storage.
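
As a rough illustration (and not part of the tutorial that follows), active data might be kept at the lighter Advanced Row Compression level while dormant data is rebuilt at one of the heavier columnar levels. The table names below are hypothetical, and the COLUMN STORE (Hybrid Columnar Compression) levels require storage that supports HCC, such as Exadata:

-- lighter compression for data that is still being modified (hypothetical table)
ALTER TABLE orders MOVE ROW STORE COMPRESS ADVANCED;

-- heavier columnar compression for rarely accessed data
-- (hypothetical table; requires HCC-capable storage)
ALTER TABLE orders_archive MOVE COLUMN STORE COMPRESS FOR ARCHIVE HIGH;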

Oracle 12c has a new feature called Heat Map which tracks and marks data, even down to the row and block level, as it goes through lifecycle changes.

ADO, or Automatic Data Optimization, works with the Heat Map feature and allows us to create policies at the tablespace, object and even row level which specify conditions dictating when data will be moved or compressed, based on statistics related to data usage.
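
To give a feel for the syntax before walking through the example, here is a minimal sketch of ADO policies at different scopes. The table SALES_HISTORY and the tablespace LOW_COST_TS are hypothetical names used purely for illustration:

-- segment-level compression once the whole segment has been dormant for 30 days
ALTER TABLE sales_history ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;

-- row-level compression of blocks whose rows have not been modified for 30 days
ALTER TABLE sales_history ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW AFTER 30 DAYS OF NO MODIFICATION;

-- storage tiering: move the segment to a cheaper tablespace
ALTER TABLE sales_history ILM ADD POLICY
  TIER TO low_cost_ts;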

Real-time data access statistics are collected in memory and exposed through the V$HEAT_MAP_SEGMENT view, and are regularly flushed in the background to tables on disk such as HEAT_MAP_STAT$, which are presented to the DBA via views like DBA_HEAT_MAP_SEG_HISTOGRAM and DBA_HEAT_MAP_SEGMENT.
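
As a quick sketch of where to look (column names as documented for 12.1, so adjust if your release differs; MYOBJECTS is the demo table used later in this post), the in-memory view can be compared with the persisted dictionary view once the data has been flushed:

-- real-time, in-memory statistics for the current instance
select object_name, track_time, segment_write, full_scan, lookup_scan
from v$heat_map_segment
where object_name = 'MYOBJECTS';

-- persisted statistics, visible after the in-memory data has been flushed to disk
select owner, object_name, segment_write_time, segment_read_time, full_scan
from dba_heat_map_segment
where owner = 'SCOTT'
and object_name = 'MYOBJECTS';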

Heat Map is enabled at the instance level by setting the parameter HEAT_MAP to ON.

Let us now look at an example of how we can use Heat Map and ADO to perform compression level tiering of data.

This example is based on the 12c database tutorials which can be found at the Oracle Learning Library site. The site has many good introductory tutorials for anyone wanting to try out some of the important new features now available in Oracle 12c.

In this example we will create an ILM policy which will compress the MYOBJECTS table if no modifications have been performed on the table in the past 30 days.

The assumption here is that dormant data can be compressed to save storage, and since that data is not frequently accessed, compressing it will not impact OLTP performance.

We create a procedure which simulates the passage of time in this example, so that the table qualifies for the ADO action (compression) even though 30 days have not actually elapsed.

 

CREATE OR REPLACE PROCEDURE set_stat (object_id        number,
                                      data_object_id   number,
                                      n_days           number,
                                      p_ts#            number,
                                      p_segment_access number)
as
begin
  insert into sys.heat_map_stat$
    (obj#, dataobj#, track_time, segment_access, ts#)
  values
    (object_id, data_object_id, sysdate - n_days, p_segment_access, p_ts#);
  commit;
end;
/

 

We then grant execute on this procedure to SCOTT and turn on the Heat Map option at the instance level.

 SQL> grant execute on set_stat to scott;
 SQL> alter system set heat_map=on scope=both;

After enabling heat map tracking, we set the heat map tracking start time back 30 days to ensure statistics logged after this time are valid and considered by Automatic Data Optimization (ADO).

exec dbms_ilm_admin.set_heat_map_start(start_date => sysdate - 30)

We now connect as SCOTT and populate the table with data.

SQL> create table myobjects as select * from all_objects;
Table created.

SQL> declare
       sql_test clob;
     begin
       for i in 1..5 loop
         sql_test := 'insert /*+ append */ into scott.myobjects select * from scott.myobjects';
         execute immediate sql_test;
         commit;
       end loop;
     end;
     /

Note that the size of the (uncompressed) table is 320 MB:

SQL> select sum(bytes)/1048576 from user_segments where 
    segment_name='MYOBJECTS';

SUM(BYTES)/1048576
 ------------------
 320

SQL> select count(*) from myobjects;

COUNT(*)
 ----------
 2360288

Now verify that heat map tracking collected statistics for the SCOTT.MYOBJECTS table.

SQL> alter session set nls_date_format='dd-mon-yy hh:mi:ss';

Session altered.

select OBJECT_NAME, SEGMENT_WRITE_TIME, SEGMENT_READ_TIME, FULL_SCAN
FROM dba_heat_map_segment
WHERE OBJECT_NAME='MYOBJECTS'
AND OWNER = 'SCOTT';

OBJECT_NAME  SEGMENT_WRITE_TIME  SEGMENT_READ_TIME   FULL_SCAN
-----------  ------------------  ------------------  ------------------
MYOBJECTS    09-sep-13 02:39:09

col "Segment write" format A14
col "Full Scan" format A12
col "Lookup Scan" format a12

select object_name, track_time "Tracking Time",
       segment_write "Segment write",
       full_scan "Full Scan",
       lookup_scan "Lookup Scan"
from DBA_HEAT_MAP_SEG_HISTOGRAM
where object_name='MYOBJECTS'
and owner = 'SCOTT';

OBJECT_NAME  Tracking Time       Segment write  Full Scan  Lookup Scan
-----------  ------------------  -------------  ---------  -----------
MYOBJECTS    09-sep-13 02:40:14  NO             YES        NO

Confirm that the table is not currently compressed:

SQL> select compression, compress_for from dba_tables where table_name = 'MYOBJECTS' and owner = 'SCOTT';

COMPRESS COMPRESS_FOR
 -------- ------------------------------
 DISABLED

Add a compression policy on the SCOTT.MYOBJECTS table.

ALTER TABLE scott.myobjects ILM ADD POLICY ROW STORE 
COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;

Verify that the policy has been added.

select policy_name, action_type, scope, compression_level,
       condition_type, condition_days
from user_ilmdatamovementpolicies
order by policy_name;

POLICY_NAME  ACTION_TYPE  SCOPE    COMPRESSION_LEVEL  CONDITION_TYPE          CONDITION_DAYS
-----------  -----------  -------  -----------------  ----------------------  --------------
P3           COMPRESSION  SEGMENT  ADVANCED           LAST MODIFICATION TIME              30

select policy_name, object_name, inherited_from, enabled
from user_ilmobjects;

POLICY_NAME  OBJECT_NAME  INHERITED_FROM        ENA
-----------  -----------  --------------------  ---
P3           MYOBJECTS    POLICY NOT INHERITED  YES

select * from user_ilmpolicies;

POLICY_NAME  POLICY_TYPE    TABLESPACE  ENABLED
-----------  -------------  ----------  -------
P3           DATA MOVEMENT              YES

We now want to simulate a situation where 30 days have passed without any modification to the MYOBJECTS table.
Using the set_stat procedure we created earlier, we turn the heat map statistics clock back by 30 days.
Connected as SYS:

alter session set nls_date_format='dd-mon-yy hh:mi:ss';

declare
  v_obj#     number;
  v_dataobj# number;
  v_ts#      number;
begin
  select object_id, data_object_id into v_obj#, v_dataobj#
  from all_objects
  where object_name = 'MYOBJECTS'
  and owner = 'SCOTT';

  select ts# into v_ts#
  from sys.ts$ a,
       dba_segments b
  where a.name = b.tablespace_name
  and b.segment_name = 'MYOBJECTS';

  commit;

  sys.set_stat (object_id        => v_obj#,
                data_object_id   => v_dataobj#,
                n_days           => 30,
                p_ts#            => v_ts#,
                p_segment_access => 1);
end;
/

At this stage we need to flush the in-memory heat map data to the persistent tables and views on disk.

A quick and dirty way to do this is to bounce the database rather than wait for the MMON background process to kick in and do the job.
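
A gentler alternative (not used in this walkthrough) is to ask the DBMS_ILM package to flush the segment statistics on demand, which should avoid the restart:

SQL> exec dbms_ilm.flush_all_segments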

After restarting the database, we can see that the heat map statistics now show the table as last modified more than a month ago, in August.

select object_name, segment_write_time
from dba_heat_map_segment
where object_name='MYOBJECTS';

OBJECT_NAME  SEGMENT_WRITE_TIME
-----------  ------------------
MYOBJECTS    10-AUG-13

Normally the ADO-related jobs are run in the maintenance window. Rather than wait for this window, we trigger execution manually via the DBMS_ILM.EXECUTE_ILM procedure, called from the following PL/SQL block.

SQL> conn scott/tiger
 Connected.
declare
v_executionid number;
begin
dbms_ilm.execute_ILM (ILM_SCOPE      => dbms_ilm.SCOPE_SCHEMA,
                      execution_mode => dbms_ilm.ilm_execution_offline,
                      task_id        => v_executionid);
end;
/

We can now query the USER_ILMTASKS view and see that the ADO job has been started and the ILM policy we have defined has been earmarked for execution.

select task_id, start_time from user_ilmtasks;

TASK_ID  START_TIME
-------  ----------------------------
      7  09-SEP-13 02.50.24.895834 PM
select task_id, job_name, job_state, completion_time completion from user_ilmresults;

TASK_ID  JOB_NAME    JOB_STATE    COMPLETION
-------  ----------  -----------  ----------
      8  ILMJOB1116  JOB CREATED
select task_id, policy_name, object_name, selected_for_execution, job_name
from user_ilmevaluationdetails
where task_id=11;

TASK_ID  POLICY_NAME  OBJECT_NAME  SELECTED_FOR_EXECUTION  JOB_NAME
-------  -----------  -----------  ----------------------  ----------
      8  P3           MYOBJECTS    SELECTED FOR EXECUTION  ILMJOB1116

We can now see that the table has been compressed and the ADO job has completed successfully.

select task_id, job_name, job_state, completion_time completion from user_ilmresults;

TASK_ID  JOB_NAME    JOB_STATE               COMPLETION
-------  ----------  ----------------------  ----------------------------
     11  ILMJOB1118  COMPLETED SUCCESSFULLY  09-SEP-13 03.26.44.408970 PM

 select compression, compress_for FROM user_tables where table_name = 'MYOBJECTS';
COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED  ADVANCED

After compression, the table has been reduced in size from 320 MB to 60 MB!

select sum(bytes)/1048576 from user_segments where segment_name='MYOBJECTS';
SUM(BYTES)/1048576
------------------
60

We can remove the ILM policy on the table if required.

alter table scott.myobjects ilm delete_all;
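
To confirm the cleanup, re-running the earlier policy queries as SCOTT should now return no rows for the MYOBJECTS table:

select * from user_ilmpolicies;

select policy_name, object_name, enabled from user_ilmobjects;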
 
