
LOBs – some Performance Tuning considerations

Posted by Gavin Soorma on June 23, 2009
  • If LOB data is less than 4000 bytes, it can be stored inline, that is, in the same database block as the rest of the row data. LOB data can be stored inline only when the block size is large enough to accommodate it.
  • LOB data that is stored out of line, on database blocks separate from the row data, is accessed (read and written) by Oracle in CHUNK-size pieces, where CHUNK is specified in the LOB storage clause. CHUNK must be an integer multiple of DB_BLOCK_SIZE and defaults to DB_BLOCK_SIZE if not specified. Generally, it is more efficient for Oracle to access LOB data in large chunks, up to 32 KB. However, when LOB data is updated, it may be versioned (for read consistency) and logged to both the rollback segments and the redo log in CHUNK-size pieces. If updates to LOB data are frequent, it may be more space-efficient to manipulate smaller chunks of LOB data, especially when the granularity of the update is much less than 32 KB.
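A minimal sketch of such a LOB storage clause (the table and column names are hypothetical):

    CREATE TABLE documents (
      doc_id   NUMBER PRIMARY KEY,
      content  BLOB
    )
    LOB (content) STORE AS (
      CHUNK 8192              -- must be an integer multiple of DB_BLOCK_SIZE
      ENABLE STORAGE IN ROW   -- the default: LOBs under ~4000 bytes stay inline
      PCTVERSION 10           -- percent of LOB space retained for read consistency
    );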

 

  • If the same BLOB is to be accessed frequently, specify the CACHE option in the table's LOB storage clause (see the sketch after the NOCACHE bullet below).

 

  • Use the CACHE option on interMedia column objects containing BLOBs if the same BLOB data is to be accessed frequently. The CACHE option places the data in the database buffer cache and makes it available for subsequent read operations. If you specify CACHE, then LOGGING is used; you cannot specify both CACHE and NOLOGGING.

 

  • Use the NOCACHE option (the default) if BLOB data is to be read only once or infrequently, or if you have too much BLOB data to cache, or if you are reading lots of images but none more frequently than others.
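Both settings map onto the LOB storage clause and can be changed after creation; a minimal sketch using the hypothetical DOCUMENTS table from above:

    -- Frequently re-read BLOB data: keep LOB blocks in the buffer cache
    -- (CACHE implies LOGGING)
    ALTER TABLE documents MODIFY LOB (content) (CACHE);

    -- Read-once or very large BLOB data: bypass the buffer cache (the default)
    ALTER TABLE documents MODIFY LOB (content) (NOCACHE);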

 

  • Set the CHUNK option to the amount of BLOB data that is to be accessed at one time, that is, the number of blocks that are to be read or written using the object.readFromSource or object.writeToSource interMedia audio and video object methods, or the OCILobRead( ), OCILobWrite( ), DBMS_LOB.READ( ), or DBMS_LOB.WRITE( ) calls, during one access of the BLOB value. Note that the default value for the CHUNK option is 1 Oracle block, regardless of platform. If only 1 block of BLOB data is accessed at a time, set the CHUNK option to the size of 1 block. For example, if the database block size is 2K, then set the CHUNK option to 2K.
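For instance, a PL/SQL read loop sized to the LOB's chunk size could look like the sketch below (the DOCUMENTS table is the hypothetical example from above):

    DECLARE
      l_lob    BLOB;
      l_len    PLS_INTEGER;
      l_chunk  PLS_INTEGER;
      l_buf    RAW(32767);
      l_amount PLS_INTEGER;
      l_offset PLS_INTEGER := 1;
    BEGIN
      SELECT content INTO l_lob FROM documents WHERE doc_id = 1;
      l_len   := DBMS_LOB.GETLENGTH(l_lob);
      l_chunk := DBMS_LOB.GETCHUNKSIZE(l_lob);  -- usable space per storage chunk
      -- Read in CHUNK-size pieces so each call maps cleanly onto storage chunks
      WHILE l_offset <= l_len LOOP
        l_amount := LEAST(l_chunk, l_len - l_offset + 1);
        DBMS_LOB.READ(l_lob, l_amount, l_offset, l_buf);
        -- ... process l_buf here ...
        l_offset := l_offset + l_amount;
      END LOOP;
    END;
    /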

 

  • Set the CHUNK option to the next largest integer multiple of the database block size that is slightly larger than the audio, image, or video data size being inserted. Specifying a slightly larger CHUNK option allows for some variation in the actual sizes of the multimedia data and ensures that the performance benefit is realized. For large media data, a general rule is to set the CHUNK option as large as possible; the maximum is 32K in Oracle9i. For example, if the database block size is 2K, 4K, or 8K and the image data is mostly 21K in size, set the CHUNK option to 24K.
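As a hypothetical illustration of that rounding rule on an 8K-block database:

    -- Images are mostly 21K; the next multiple of 8K that is slightly
    -- larger than the data is 24K, i.e. three database blocks per chunk.
    CREATE TABLE images (
      img_id  NUMBER PRIMARY KEY,
      img     BLOB
    )
    LOB (img) STORE AS (
      CHUNK 24576   -- 24K = 3 x 8K blocks
    );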

 

  • If you explicitly specify the storage characteristics for the interMedia column object containing a BLOB, make sure that the INITIAL and NEXT parameters for the BLOB data segment storage are set to a size that is larger than the CHUNK size. For example, if the database block size is 2K and you specify a CHUNK value of 8K, make sure that the INITIAL and NEXT parameters are at least 8K, preferably higher (for example, at least 16K).
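A hypothetical sketch of that sizing rule (note that with locally managed tablespaces, extent sizes are largely handled automatically):

    CREATE TABLE scanned_docs (
      id   NUMBER PRIMARY KEY,
      img  BLOB
    )
    LOB (img) STORE AS (
      CHUNK 8192
      STORAGE (INITIAL 16K NEXT 16K)   -- keep INITIAL and NEXT >= CHUNK
    );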

 

  • If the interMedia column object containing a BLOB is small (that is, less than 4000 bytes), then storing the BLOB data out of line will decrease performance. However, storing the BLOB in the row increases the size of the row, which has a detrimental impact on performance if you are doing a lot of base table processing, such as full table scans, multiple row accesses (range scans), or many UPDATE or SELECT statements against columns other than the interMedia column objects containing BLOBs. If you do not expect the BLOB data to be less than 4000 bytes, that is, if all BLOBs are big, then the default (ENABLE STORAGE IN ROW) is still the best choice because:

      • The LOB data is automatically moved out of line once it grows bigger than 4000 bytes.
      • Performance can be better if the BLOB data is small (less than 4000 bytes including control information) and is stored inline, because the LOB locator and the BLOB data can be retrieved in the same buffer, thus reducing I/O operations.
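To confirm which of these settings are in effect for a given LOB column, the data dictionary can be queried; for example, against the hypothetical DOCUMENTS table used above:

    SELECT column_name, chunk, cache, in_row, pctversion
    FROM   user_lobs
    WHERE  table_name = 'DOCUMENTS';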

 

2 Comments

Amit M • Jul 22 2010
1) BLOBs and CLOBs are identical in creation and use; their major difference is that BLOBs are used for binary data (much like LONG RAW) while CLOBs are used for single-byte character storage (like VARCHAR2). It is recommended to use a CLOB if the data is character data, and a BLOB if the data is going to be graphics, images, etc.

2) Always specify the LOB storage clause: if you force the system to do a default storage each time a BLOB or CLOB exceeds 4000 bytes, you could cause datafile fragmentation and performance problems. The LOB storage clause gives you control instead of the system.

3) If most LOBs in your database tables are small in size (8K bytes or less) and only a few rows have LOBs larger than 8K bytes, then use the following guidelines to maximize database performance:
  • Use ENABLE STORAGE IN ROW
  • Set the DB_BLOCK_SIZE initialization parameter to 8K bytes and use a chunk size of 8K bytes

4) When storing a large LOB inside an Oracle data file, you should consider a 32K-blocksize tablespace (to avoid chaining) and carefully set PCTFREE and PCTUSED to minimize wasted space within the Oracle data blocks. Different blocksizes can improve performance in a variety of ways:
  • Contention reduction: small rows in a large block perform worse under heavy DML than large rows in a small blocksize.
  • Faster updates: heavy insert/update tables can see faster performance when segregated into another blocksize which is mapped to a small data buffer cache. Smaller data buffer caches often see faster throughput performance.
  • Reduced pinging: RAC can perform far faster with smaller blocksizes, reducing cache fusion overhead.
  • Less RAM waste: moving random-access small-row tables to a smaller blocksize (with a corresponding small-blocksize buffer) will reduce buffer waste and improve the chance that other data blocks will remain in the cache.
  • Faster scans: tables and indexes that require full scans can see faster performance when placed in a large blocksize.
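As a minimal sketch of the multiple-blocksize setup described in point 4 above (the names and sizes are illustrative, a 32K block size is not available on every platform, and the non-default buffer cache must be configured first):

    -- Allocate a buffer cache for 32K blocks before creating the tablespace
    ALTER SYSTEM SET DB_32K_CACHE_SIZE = 64M;

    -- Create a 32K-blocksize tablespace for large LOB segments
    CREATE TABLESPACE lob_data_32k
      DATAFILE '/u01/oradata/orcl/lob_data_32k01.dbf' SIZE 500M
      BLOCKSIZE 32K;

    -- Store the LOB segment there with the maximum 32K chunk
    CREATE TABLE videos (
      id    NUMBER PRIMARY KEY,
      clip  BLOB
    )
    LOB (clip) STORE AS (
      TABLESPACE lob_data_32k
      CHUNK 32768
      NOCACHE
    );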
Gavin Soorma • Jul 22 2010
thanks for the good information Amit - have you worked much with 11g Secure Files...?
