Kevin Kempf's Blog

May 11, 2010

Advanced Compression and Interface Tables

Filed under: 11g, 11i, advanced compression — kkempf @ 2:57 pm

Advanced Compression – More Stories from the Front Lines

We’re several months into Advanced Compression, after the Dataguard bug/debacle, and we’re really quite happy with the product.  Practically speaking, the ERP runs from the buffer cache, and thus our I/O is down to a trickle.  As a matter of course, I began compressing tables en masse during the evenings in an effort to just “get everything compressed”.  Not that the gain is so great (most of the big tables are done), but in the interest of standardization: I don’t want to have to go look and see “is it compressed?”.  Recently, however, I arrived one morning to a standard email report I have delivered which told of dozens and dozens of lines stuck in the INV.MTL_TRANSACTIONS_INTERFACE table.  For those not familiar with discrete manufacturing, this is the table where material moves and completions are processed, among other things (due at least in part to Highjump, a bolt-on application which lets workers use wireless handheld terminals to do their job).  In other words, in a manufacturing environment it’s a busy place, and it needs to work correctly.

Hello, ORA-00600

The alert log doesn’t lie.  This entry corresponded to the failure times of the material transactions:

Errors in file /u01/appprod/oracle/proddb/diag/rdbms/prod/PROD/trace/PROD_ora_8556.trc  (incident=9989):
ORA-00600: internal error code, arguments: [kdsgrp1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /u01/appprod/oracle/proddb/diag/rdbms/prod/PROD/incident/incdir_9989/PROD_ora_8556_i9989.trc

The trace file revealed the RDBMS file# and block#; running that to ground confirmed that it was the INV.MTL_TRANSACTIONS_INTERFACE table.  Searching Metalink/MOS on the term KDSGRP1 (and changing the default search so it included bugs), I found bug 9072813.  This was for an HP environment, but was such a similar hit that I pursued the “base bug” 8771916 which, although the bug itself wasn’t visible, had a patch corresponding to it.  A very, very recent patch, entitled “8771916: ORA-00600 [KDSGRP1] WHEN DOING AN UPDATE”.
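For reference, running a file# and block# from a trace file to ground can be sketched with a query against DBA_EXTENTS (the literal file and block numbers below are placeholders; substitute the ones from your own trace):

```sql
-- Map an RDBMS file# and block# from a trace file back to its segment.
-- file_id = 42 and block 1337 are placeholder values for illustration.
select owner, segment_name, segment_type
from   dba_extents
where  file_id = 42
and    1337 between block_id and block_id + blocks - 1;
```

This is how you confirm which object owns a corrupted or suspect block before chasing bugs on MOS.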

Summing it all up

The first thing I did was to uncompress the MTL_TRANSACTIONS_INTERFACE table.  It went from 16 blocks back to 800, and from 0.13 MB back to 6.25 MB.  That fixed the issue, but there’s more to the story.  The bug in question appears to pop when there’s massive write contention on a table (read: block).  I had just reduced the number of blocks in this table 50-fold; that’s bound to cause more contention.  In the end my fix was to also apply patch 8771916 to my production environment, knowing there could be other tables where this could occur.  If you’re running Advanced Compression, I’d urge you to at least look into the bug/patches.  If you’re running 11i and compression, I’d definitely avoid compressing interface tables.
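If you want to spot-check a table’s compression state and size before and after this kind of change, something like the following works (a sketch; DBA_TABLES exposes the compression attributes in 11g):

```sql
-- Is the table compressed, and for what operations?
select table_name, compression, compress_for
from   dba_tables
where  owner = 'INV'
and    table_name = 'MTL_TRANSACTIONS_INTERFACE';

-- How big is the underlying segment?
select blocks, round(bytes/1024/1024, 2) as mb
from   dba_segments
where  owner = 'INV'
and    segment_name = 'MTL_TRANSACTIONS_INTERFACE';
```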


March 8, 2010

Advanced Compression with 11i, follow up impressions

Filed under: 11g, 11i, advanced compression — kkempf @ 9:31 am

We’ve been running Advanced Compression for three weeks now without incident, and I have to say, things look impressive.

  • IO is way down (most of the relevant objects residing in the buffer cache)
  • CPU is up slightly (compress/uncompress overhead, we presume)
  • Performance is as good as or better than I can ever recall within the Professional Forms interface

As of today, I have only a handful of tables compressed:

SQL> select count(*) from all_tables where compression = 'ENABLED';

  COUNT(*)
----------
        55

Yet this is the most telling indicator I can show you, with only these 55 objects compressed:

February 22, 2010

The incredible shrinking database

Filed under: 11g, 11i, advanced compression — kkempf @ 8:56 am

It’s been a week since I applied RDBMS patch 8277580 and compressed the one table which was causing data corruption on the standby database (BOM.CST_ITEM_COST_DETAILS).  I’ve been watching very closely, and the behavior I saw last spring has not come up again.  So I’ll give credit where credit is due and say the fix Oracle provides in the patch works as advertised, albeit a year late.

Yesterday during our quiet period (we’re a 24/7 shop, but on Sunday afternoons there are only a half dozen folks logged into the ERP), I began compressing more tables.

My general approach is pretty simple:

1. Determine what tables are consuming the most space in the buffer cache

2. Compress those tables & rebuild their indexes

  • alter table X move compress for all operations;
  • alter index X rebuild;

3. Repeat
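Step 1 above can be sketched with a query against V$BH, which exposes one row per buffer in the cache; joining it to DBA_OBJECTS ranks objects by how many cache buffers they hold (one way to do it; the 20-row cutoff is arbitrary):

```sql
-- Rank non-dictionary objects by blocks currently held in the buffer cache.
select *
from (
    select   o.owner, o.object_name, count(*) as cached_blocks
    from     v$bh b
    join     dba_objects o on o.data_object_id = b.objd
    where    o.owner not in ('SYS', 'SYSTEM')
    group by o.owner, o.object_name
    order by count(*) desc
)
where rownum <= 20;
```

The biggest consumers of buffer cache are the natural first candidates for compression, since shrinking them frees cache as well as disk.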

There were 2 exceptions to this; first, I had to compress BOM.CST_ITEM_COST_DETAILS first because I needed to confirm that the bug fix worked.  Second, I got the notion that I should take my largest non-sys/system table (INV.MTL_CYCLE_COUNT_ITEMS) and compress it, to see how it went.   The results are pretty convincing:

OWNER    TABLE_NAME                   BEFORE_MB  AFTER_MB  DISK_SAVED_MB  PCT_CACHED  CACHE_SAVED_MB
-------  ---------------------------  ---------  --------  -------------  ----------  --------------
INV      MTL_CYCLE_COUNT_ITEMS         18833.75   3082.25       15751.50        0.01          157.52
INV      MTL_MATERIAL_TRANSACTIONS      2859.13    925.50        1933.63        1.00         1933.63
INV      MTL_TRANSACTION_ACCOUNTS       2002.63    625.38        1377.25        0.97         1335.93
BOM      CST_ITEM_COST_DETAILS          1717.38    429.88        1287.50        0.47          605.13
INV      MTL_TRANSACTION_LOT_NUMBERS    1209.50    443.63         765.87        0.98          750.55
WIP      WIP_TRANSACTIONS                904.63    377.75         526.88        0.99          521.61
AR       RA_CUSTOMER_TRX_LINES_ALL       312.13    118.00         194.13        0.99          192.19
WSH      WSH_DELIVERY_DETAILS            330.00    167.50         162.50        1.00          162.50
PO       RCV_TRANSACTIONS                211.38     62.88         148.50        0.98          145.53
AR       RA_CUSTOMER_TRX_ALL             166.00     37.63         128.37        0.99          127.09
WIP      WIP_OPERATIONS                  236.88    126.75         110.13        0.96          105.72
APPLSYS  WF_ITEM_ACTIVITY_STATUSES       123.50     33.88          89.62        0.94           84.24
WIP      WIP_REQUIREMENT_OPERATIONS      165.38     83.38          82.00        0.99           81.18
INV      MTL_SYSTEM_ITEMS_B              382.50    384.50          -2.00        1.00           -2.00
ONT      OE_ORDER_LINES_ALL              314.88    320.50          -5.62        0.99           -5.56
SUM                                                             22550.26                     6195.26

In short, after compressing just 15 tables I’ve saved 22.5 GB on disk and 6 GB in my buffer cache.  A hidden bonus is the index space savings, but I don’t have numbers on that (yet).  It’s too early to make any generalizations about the server and its workload, as the cache needs to “right” itself after a weekend.
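When those index numbers are wanted, a starting point would be summing index segment sizes per table before and after the compress/rebuild cycle (a sketch; the table name is just an example from the list above):

```sql
-- Total size of all indexes on a given table, to quantify index
-- space savings after a compress + rebuild pass.
select   i.table_name,
         round(sum(s.bytes)/1024/1024, 2) as index_mb
from     dba_indexes  i
join     dba_segments s
on       s.owner        = i.owner
and      s.segment_name = i.index_name
where    i.table_owner  = 'INV'
and      i.table_name   = 'MTL_MATERIAL_TRANSACTIONS'
group by i.table_name;
```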

February 15, 2010

Advanced Compression with 11i, Take 2, Day 1

Filed under: 11g, 11i, advanced compression, Bugs — kkempf @ 10:56 am

So far, so good

After waiting about a year since my first attempt, the timing was finally right for me to patch the core RDBMS with Patch 8277580: ORA-07445: [__INTEL_NEW_MEMCPY()+44].  This was Oracle’s fix for the data corruption I was seeing on my dataguard physical standby after I compressed BOM.CST_ITEM_COST_DETAILS (no other tables seemed affected in this way).

I had my doubts, but nearly 24 hours later, and several cost rollup runs have elapsed without incident. The raw stats look great:

OWNER  TABLE_NAME                 BEFORE_MB  AFTER_MB  PCT_SAVED
-----  -------------------------  ---------  --------  ---------
BOM    CST_ITEM_COST_DETAILS        1717.38    429.88        .75

By the way, it’s amazing how much “stuff” touches that particular table in a discrete manufacturing 11i environment.  Can you tell when I recreated the indexes?

Rebuild your indexes!

January 20, 2010

Advanced Compression, Take 2

Filed under: 11g, 11i, advanced compression, Oracle — kkempf @ 9:14 am

Throw Away Hours of Work

It’s been almost a year, and I have an SR open that long to prove it.  It’s been in “Waiting on Customer” status for quite some time now.  In fact, they tend to ping me every month or two in hopes of closing the ticket.  Fair play, I’d say, since I was waiting on Oracle for a fix for months before they officially acknowledged they even had a problem.  I continue to tell them that once I have a viable environment in which to test this problem and their fix, I will do so.

Turns out, the time has arrived.  We’ve finished regression testing database patch 8277580; this is what support told me to apply last August.  The real problem with testing this patch is twofold.  First, I don’t have the luxury ($) of a TEST environment which includes a physical standby database.  On top of that, making the actual corruption happen is problematic.  The table which was corrupted, every time, on the standby was BOM.CST_ITEM_COST_DETAILS.  There simply isn’t enough traffic/volume of work going on in any of my non-production environments to cause the corruption to happen anywhere but PROD.  If money were no object, perhaps I could use some kind of load simulator (Oracle’s or otherwise) to reproduce this failure, but realistically, how much money do I want to spend to implement this product?  So my solution is pretty simple: apply the patch on both sides (primary/standby) of my PROD system, and wait and see what happens to the standby.  Like clockwork, I always had alert log entries, followed by the database shutdown, within 24 hours of compressing this table.  If I continue to get corruption, I don’t know what I’ll do, besides have a long, unpleasant talk with my sales rep.

Regardless, I hope to apply the patch during the next maintenance window in February, which means within a month I should have some solid results to report on this issue.

September 8, 2009

11.2 Install

Filed under: 11g, advanced compression, Oracle, RMAN — kkempf @ 12:10 pm

I bit the bullet and installed 11.2 on my Enterprise Manager/RMAN repository database last week.  It’s a small VM, used mostly by me alone, so I figured why not?

First impressions were a little mixed.  I really thought the installer looked great.  It’s come a long way from when I first started using it (8i).  I wish I’d taken some screenshots, but basically it had a consistent look on all pages, and a nice “progress bar” as you answered the questions before installing.  In addition, when it was checking prerequisites, it had the ability to do a “fix it” for kernel parameters, which was kind of convenient.  I did have to use yum to install a few packages (I don’t know if I strictly had to, but the point is the installer asked for them):

  • elfutils-libelf-devel
  • unixODBC
  • unixODBC-devel

I always do a manual upgrade.  I’m just too old school to trust the DBUA, so when I ran the pre-upgrade script against my 11.1 environment, here’s what it had to say:

SQL> @utlu112i
Oracle Database 11.2 Pre-Upgrade Information Tool    09-03-2009 10:31:53
--> name:          OEMREP
--> version:
--> compatible:    11.1.0
--> blocksize:     8192
--> platform:      Linux x86 64-bit
--> timezone file: V10
Tablespaces: [make adjustments in the current environment]
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 576 MB
.... AUTOEXTEND additional space required: 86 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 129 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 576 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
--> MGMT_TABLESPACE tablespace is adequate for the upgrade.
.... minimum required size: 2932 MB
Flashback: OFF
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
-- No update parameter changes are required.
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
-- No renamed parameters found. No changes are required.
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
-- No obsolete parameters found. No changes are required
Components: [The following database components will be upgraded or installed]
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
Miscellaneous Warnings
WARNING: --> Database is using a timezone file older than version 11.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the database timezone version
.... to the latest version which comes with the new release.
WARNING: --> Database contains schemas with stale optimizer statistics.
.... Refer to the Upgrade Guide for instructions to update
.... schema statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
....   SYS
WARNING: --> log_archive_format must be updated.
.... As of 10.1, log_archive_format requires a %r format qualifier
.... be present in its format string.  Your current setting is:
.... log_archive_format='OEMREP%T%R__%s.arc'.
.... Archive Logging is currently ON, and failure to add the %r to the
.... format string will prevent the upgraded database from starting up.
WARNING:--> recycle bin in use.
.... Your recycle bin is turned on and it contains
.... 203 object(s).  It is REQUIRED
.... that the recycle bin is empty prior to upgrading
.... your database.
.... The command:  PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.

In short: empty your recycle bin, and %r is now a required part of the archivelog format.  No problem; I made the minor changes and proceeded with the manual upgrade.  It didn’t complain again until I was done and wanted to connect to RMAN and upgrade the catalog.
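Those two fixes amount to something like the following (a sketch; the exact format string shown is illustrative, not the value actually used on this system, so long as it carries the required %r resetlogs qualifier):

```sql
-- Empty the recycle bin immediately before running the upgrade.
PURGE DBA_RECYCLEBIN;

-- Add the required %r qualifier to the archivelog name format.
ALTER SYSTEM SET log_archive_format = 'OEMREP_%t_%s_%r.arc' SCOPE = SPFILE;
```

The spfile change takes effect on the next restart, which the upgrade involves anyway.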

First problem: ZLIB is apparently no longer a valid compression algorithm.  Opened an SR to figure out what happened to it, as it was the only piece of advanced compression that worked.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on oem_disk_backup channel at 09/03/2009 11:51:40
ORA-19962: compression algorithm ZLIB is not valid with release

Next problem: upgrading the recovery catalog failed.  I can only describe my repository as vanilla, but when I tried upgrade catalog I received this error:

[oracle@zso-orabkp-01 env]$ rman catalog  rcat_user/pw@oemrep

Recovery Manager: Release - Production on Thu  Sep 3 12:06:43 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates.   All rights reserved.

connected to recovery catalog database
PL/SQL package  RCAT_USER.DBMS_RCVCAT version in RCVCAT database is not
recovery catalog is partially upgraded to

RMAN> upgrade  catalog;

recovery catalog owner is RCAT_USER
enter UPGRADE CATALOG command  again to confirm catalog upgrade

RMAN> upgrade catalog;

error creating  remove_null_db_unique_name_rows_from_node
RMAN-00571:  ===========================================================
RMAN-00569:  =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571:  ===========================================================
RMAN-06004: ORACLE  error from recovery catalog database: ORA-02292: integrity constraint
(RCAT_USER.TSATT_F2) violated - child record found

I opened an SR on this issue; it looks like a bug.  In the end my solution was to drop the catalog, rebuild it from scratch, then re-register all the databases.  Once that was done, everything was fine and I’ve been running on 11.2 ever since.  I’m still going over what else may be of benefit in this release, but at least I can say I’m running 11.2 in a production instance!
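The drop/rebuild/re-register cycle looks roughly like this (a sketch; DROP CATALOG prompts you to enter the command a second time to confirm, and REGISTER DATABASE is run once per target while connected to both the target and the catalog):

```sql
-- Connected to the catalog database as the catalog owner:
RMAN> DROP CATALOG;
RMAN> CREATE CATALOG;

-- Then, connected to each target database and the new catalog:
RMAN> REGISTER DATABASE;
```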

September 2, 2009

RDBMS 11gR2 Released

Filed under: 11i, advanced compression, Oracle — kkempf @ 8:33 am

I happened to see something about 11gR2 which linked me to this page.  I had expected the release to be in conjunction with Open World in October, but I was surprised to see it now available for download.  As a brief aside, I can see they’ll be promoting this instead in San Francisco.  When I was there in 2001, they were promoting Sun and EMC.  Then in 2006 they were promoting AMD and trying to kill Red Hat.  In 2008 it was HP Exadata.  Well, they certainly spread the love.

I found it on edelivery and  OTN for linux x86 and linux x86-64.  I found it odd that I get spam email from Oracle on things completely irrelevant to me (such as Metalink 3 downtime) but I didn’t get a heads up on 11gR2.

You can browse the new features here.  Unsurprisingly, Oracle says that this is now the terminal 11.1 release, and updates will be delivered through PSUs from this point forward.  Based on that information, and Oracle’s Lifetime Support policy, the Premier Support life for the major current RDBMS versions looks like this:

10.2 July 2010 (1 Year extension)
11.1  August 2012
11.2  September 2014

If you spend any time on their promotional page, you’ll find there is an “eduquiz” on 11gR2 features.  Since it boasts claims like 100% improvement in vague categories like productivity, it must have been done by a marketing department.  The one I found especially entertaining was in regards to Advanced Compression, so I added an answer which I think they missed in their quiz based on my implementation:


July 27, 2009

Pulling the plug on 11i Advanced Data Compression

Filed under: 11g, advanced compression, Bugs, Dataguard — kkempf @ 10:49 am

This past weekend I uncompressed all my tables, rebuilt the associated indexes, and said adieu to Advanced Compression. Based on my (still open) Dataguard block corruption SR, the admin at work likes to call it Advanced Data Corruption. Regardless, I may have lived with the first issue as a beta tester, but after a second ORA-00600 related to the product cropped up last week, I gave up. When support admits that they have no documentation of the error internally, and begins suggesting the most inane course of action you can come up with, it’s time to give it the boot. It is my opinion that while this product works in the simplest environment, on the whole it is half-baked, not well tested, and not worth being a “guinea pig” or “production beta tester” for.

Incidentally, the syntax to uncompress (and move, recreating the object completely uncompressed) is:

alter table INV.MTL_MATERIAL_TRANSACTIONS move nocompress;
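One caveat worth spelling out: the MOVE leaves every index on the table UNUSABLE, which is why the rebuild step above is mandatory. A sketch of finding and fixing them (the index name in the rebuild is hypothetical, for illustration):

```sql
-- After ALTER TABLE ... MOVE, dependent indexes are marked UNUSABLE.
select owner, index_name
from   dba_indexes
where  table_owner = 'INV'
and    table_name  = 'MTL_MATERIAL_TRANSACTIONS'
and    status      = 'UNUSABLE';

-- Rebuild each one (hypothetical index name):
alter index INV.MTL_MATERIAL_TRANSACTIONS_N1 rebuild;
```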

I have to say I really wanted this product to work. In principle, it’s a great idea, and in practice, it seems close to achieving what I would say was success:

  • Disk sat idle most of the day because everything was in the buffer cache
  • Most objects compressed extremely well, and total disk consumption fell significantly
  • With the exception of Dataguard (and the unknown OLTP ora-0600), it operated seamlessly with the apps

In the end, however, I’d tell anyone considering it to wait.  It’s not there yet.

*edit* Dataguard has been up and running just fine for over 2 weeks now, since I uncompressed all the database objects. Oracle development must be very close to what they think is a fix, based on my SR/bug. But I’m not putting compression back in anytime soon.

July 22, 2009

Advanced Compression … not ready for prime time?

Filed under: 11g, advanced compression, Oracle — Tags: — kkempf @ 7:50 pm

First, a little background. We have a highly customized bolt-on application in 11i for automated data collection (barcode scanning) and some label generation, which we uniformly despise as an IT group. Functionally, it does the job well enough, and makes the work on the floor more accurate, faster, and in truth probably a bit easier for the manufacturing workforce. We despise it from an IT perspective because it’s buggy, runs on a Windows (read: inherently unreliable) application server, and appears to be built on some combination of DOS, VB, and the .NET framework. It requires 2 additional Oracle databases to run (the ERP, somehow, manages to run on only one…) and, if I had to guess, has caused 95+% of our unplanned downtime in the last 2 years. Our favorite part is that it has a sort of ad-hoc query component which can cause some of the worst explain plans I’ve ever seen. It was no surprise, then, that today it caused me more grief.

EM started showing high “concurrency” contention tied to an object in our custom schema, which clearly made this bolt-on the culprit. More technically, these were buffer busy waits, and I couldn’t get a good grasp on what had changed. As per standard operating procedure, I restarted the application (which often fixes problems, since it’s so unstable), but this failed to shake the issue. I then bounced the Windows application server it runs on, but this, too, was to no avail. It was churning through DML transactions with many blocking sessions and buffer waits.

When I finally got a look at the alert log, I see this entry (note: this is not a typo, it really says “Compressionion“):

ORA-00600: internal error code, arguments: [OLTP Compressionion Block Check], [5], [], [], [], [], [], [], [], [], [], []

It spit out a trace file which was effectively unreadable to me (hex and block dumps, no human-readable text), and tkprof showed nothing. On a hunch, I uncompressed the table at the center of this issue. For now, this seems to have resolved it. I don’t think I’d describe the nature of the updates to this table as OLTP, but, to be fair, it is a reasonably volatile table which was compressed, and I thought this may be the issue. Despite myself, I opened an SR with the alert log and trace file, just to see what Oracle thought. No word yet, but when I searched Metalink/My Oracle Support for the term “OLTP Compressionion Block Check” or, what I thought was the correct error, “OLTP Compression Block Check”, I had 0 hits. None. Guess we’re on the bleeding edge of Advanced Compression. Seriously, what the heck is a Compressionion? It’s almost laughable: are we so far out on the cutting edge that even the error messages/code blocks haven’t been proofread?

July 21, 2009

LogMiner & Advanced Compression

Filed under: 11g, advanced compression, LogMiner, Oracle — Tags: , — kkempf @ 8:06 pm

Because my code fix from Oracle doesn’t seem imminent, I decided to take a look at the offending archivelog from LogMiner’s perspective today, and see if I could glean any useful knowledge about what it was trying to do when it killed the Dataguard apply service.  If nothing else, this serves as a good demo of how to see the contents of one archivelog in any environment; it’s not something I’ve used since RDBMS 8i days, and the Oracle documentation is pretty disorganized (see here).

Here are the “knowns” going into this experiment, which were easily obtained by looking at the alert log on my standby:

  1. Archive log sequence# 23676 causes Dataguard apply to crash
  2. The segment in question is BOM.CST_ITEM_COST_DETAILS

I started by restoring the “offending” archivelog, then invoking LogMiner and adding it to the current LogMiner worklist:

exec dbms_logmnr.add_logfile(logfilename => '/usr/tmp/PROD00010681000855__23676.arc', options => dbms_logmnr.new);

Next I started LogMiner, using the existing dictionary from the online catalog (I tried a few other ways, such as a flat file export, but this was easiest):

exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

Now I wanted to see what operation was going on at the time of the block corruption, querying v$logmnr_contents for the segment in question:

select seg_owner, seg_name, operation, sql_redo, sql_undo
from   v$logmnr_contents
where  seg_owner = 'BOM'
and    seg_name  = 'CST_ITEM_COST_DETAILS';

SEG_OWNER  SEG_NAME                  OPERATION       SQL_REDO             SQL_UNDO
---------- ------------------------- --------------- -------------------- --------------------
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
827 rows selected.

Yep, totally useless: 100% “Unsupported” values.   It had been a while since I’d used LogMiner, so I wanted to be sure I was doing it right; I took a (short) but otherwise random SCN and received the expected results:

select seg_owner, scn, seg_name, operation, sql_redo, sql_undo
from   v$logmnr_contents
where  sql_redo != 'Unsupported'
and    scn = 5975016494077;

SEG_OWNER             SCN SEG_NAME    OPERATION       SQL_REDO             SQL_UNDO
---------- -------------- ----------- --------------- -------------------- --------------------
APPLSYS     5975016494077 FND_LOGINS  INSERT          insert into "APPLSYS delete from "APPLSYS
                                                      "."FND_LOGINS"("LOGI "."FND_LOGINS" where
                                                      N_ID","USER_ID","STA  "LOGIN_ID" IS NULL
                                                      RT_TIME","END_TIME", and "USER_ID" IS NUL
                                                      "PID","SPID","TERMIN L and "START_TIME" I
                                                      AL_ID","LOGIN_NAME", S NULL and "END_TIME
                                                      "SESSION_NUMBER","SU " IS NULL and "PID"
                                                      BMITTED_LOGIN_ID","S IS NULL and "SPID" I
                                                      ERIAL#","PROCESS_SPI S NULL and "TERMINAL
                                                      D","LOGIN_TYPE","SEC _ID" IS NULL and "LO
                                                      URITY_GROUP_ID") val GIN_NAME" IS NULL an
                                                      ues (NULL,NULL,NULL, d "SESSION_NUMBER" I
                                                      NULL,NULL,NULL,NULL, S NULL and "SUBMITTE
                                                      NULL,NULL,NULL,NULL, D_LOGIN_ID" IS NULL
                                                      NULL,NULL,NULL);     and "SERIAL#" IS NUL
                                                                           L and "PROCESS_SPID"
                                                                           IS NULL and "LOGIN_
                                                                           TYPE" IS NULL and "S
                                                                           ECURITY_GROUP_ID" IS
                                                                           NULL and ROWID = 'A

At this point, I wasn’t sure what the relationship was between Unsupported and object types; a little digging around on Metalink turned up Doc 282994.1, which makes it clear that certain actions and certain object types throw this message, but none of them were specific to the 11g RDBMS. That, in addition to the fact that the document was ancient, made me keep looking.

I had a hunch that compressed tables were the culprit, and that LogMiner simply couldn’t display the redo/undo SQL because of compression. However, when I did a little more querying I found that a compressed table usually shows Unsupported under SQL_REDO and SQL_UNDO, but not necessarily always. By contrast, uncompressed objects would usually contain SQL but sometimes also show Unsupported in those columns of the view. I’m going to see if there’s a tie to the action type tomorrow, and I’ll update the post.

*edit* “Official” word from support is as so:

LogMiner does not support these datatypes and table storage attributes:
– BFILE datatype
– Simple and nested abstract datatypes (ADTs)
– Collections (nested tables and VARRAYs)
– Object refs
– Tables using table compression
– SecureFiles

This fails to explain why I had some advanced compression objects which did have visible redo/undo, but I grow tired of dealing with support.
