Kevin Kempf's Blog

December 9, 2015

FND_LOBS and Securefiles (Part II)

Filed under: 11g, 12c, R12, R12.2 — kkempf @ 3:03 pm

An Actionable Plan

Last post, I talked about how I was playing with securefiles, fnd_lobs, deduplication and advanced compression. It took a bit of tweaking, but I think I can safely share my success at reining in FND_LOBS. I'm going to outline the process of an online conversion of FND_LOBS from a basicfile definition (the default) to securefiles.

Jumping off a Bridge

Ever see Oracle do a presentation where the first slide is legalese called a Safe Harbor, which basically says everything you are about to see and hear is subject to change and no business plans should be made on assumptions or potential future releases? I feel like I should make a similar warning; let's call it a bridge jumper. Remember the old saying, "If all your friends jumped off a bridge, would you do it too?" Here's what I'll say: this works in DEV on EBS version R12.2, and I've done some minimal functional testing. Don't do it in PROD until you prove that it doesn't impact something I forgot about or can't foresee. I recommend a regression pass. Also, don't use products you're not licensed for, such as Advanced Compression, even in non-production environments.

References

http://docs.oracle.com/cd/B28359_01/appdev.111/b28393/adlob_smart.htm

http://www.oracle.com/technetwork/articles/sql/11g-securefiles-084075.html

Just the Highlights

You can convert your FND_LOBS table from basicfiles to securefiles via online redefinition. In doing so, you define the new lob table with securefile storage, and if you use the deduplication and compression clauses, you will probably save a lot of space. I did, dropping from 150GB to about 50GB.
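Before starting, you can ask the database whether FND_LOBS is even a valid candidate for online redefinition; a minimal sketch, assuming you have EXECUTE on DBMS_REDEFINITION:

```sql
-- Sketch: verify FND_LOBS can be redefined online using its primary key.
-- Raises an error describing the obstacle if it cannot.
begin
  dbms_redefinition.can_redef_table(
     uname        => 'APPLSYS'
    ,tname        => 'FND_LOBS'
    ,options_flag => dbms_redefinition.cons_use_pk);
end;
/
```

If this block completes silently, the table qualifies and the steps below should apply.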

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
SQL> /
OWNER     TABLE_NAME   COLUMN_NAME   SEGMENT_NAME                B.BYTES/1024/1024
-------   ----------   -----------   -------------------------   -----------------
APPLSYS   FND_LOBS     FILE_DATA     SYS_LOB0000034032C00004$$           149115.25

By the Numbers

I really wish WordPress would let me paste code more cleanly, as I could make this a lot easier on the eyes. Well, you get what you get.

1. Increase the size of your APPS_TS_MEDIA tablespace until it can hold a second copy of your FND_LOBS lob segment at its expected post-compression size.  See my prior post to figure out how to estimate this.
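To put numbers on that, something like the following works; the datafile path and sizes here are made up, so adjust them for your own layout:

```sql
-- How much free space does APPS_TS_MEDIA have today?
select tablespace_name, round(sum(bytes)/1024/1024) free_mb
from dba_free_space
where tablespace_name = 'APPS_TS_MEDIA'
group by tablespace_name;

-- Add room for the new securefile segment (path is hypothetical).
alter tablespace apps_ts_media
  add datafile '/u05/appdev/devdata/apps_ts_media99.dbf'
  size 10g autoextend on next 1g maxsize 32g;
```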

2. Create FND_LOBS_NEW (yeah, creative name, right?) using the same syntax as your old table.  Note the deduplicate and compress high clauses.

create
  table applsys.fnd_lobs_new
  (
    file_id          number
  ,file_name         varchar2( 256)
  ,file_content_type varchar2( 256) not null
  ,file_data blob
  ,upload_date     date
  ,expiration_date date
  ,program_name    varchar2( 32)
  ,program_tag     varchar2( 32)
  ,language        varchar2( 4) default( userenv( 'LANG') )
  ,oracle_charset  varchar2( 30) default( substr( userenv( 'LANGUAGE'),instr( userenv( 'LANGUAGE'),'.') + 1) )
  ,file_format     varchar2( 10) not null
  ,primary key( file_id ) validate
  )
  tablespace apps_ts_media pctfree 10 initrans 10 maxtrans 255 storage
  (
    initial 128k buffer_pool default
  )
  logging nocompress lob
  (
    file_data
  )
  store as securefile
  (
    deduplicate compress high tablespace apps_ts_media chunk 8192 storage( initial 128k buffer_pool default) pctversion 0 nocache logging
  ) ;

3. Take your new table out of logging and enable parallel dml in an effort to save time:

  • alter table applsys.fnd_lobs_new nologging;
  • alter session force parallel dml parallel 4;
  • set timing on

4. Map the columns and start the redefinition.  This is the step that copies the data, and it ran about 12 hours for me:

declare
  l_col_mapping varchar2(1000);
begin
  l_col_mapping :=
     'file_id file_id, '
   ||'file_name file_name, '
   ||'file_content_type file_content_type, '
   ||'file_data file_data, '
   ||'upload_date upload_date, '
   ||'expiration_date expiration_date, '
   ||'program_name program_name, '
   ||'program_tag program_tag, '
   ||'language language, '
   ||'oracle_charset oracle_charset, '
   ||'file_format file_format'
   ;
   dbms_redefinition.start_redef_table ('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW', l_col_mapping)
   ;
end;
/

5. Drop the Oracle Text (interMedia) index APPLSYS.FND_LOBS_CTX.  At this point, some users may see degraded performance against FND_LOBS, but as I understand it this index is of the insides of the LOBs, so I doubt anyone will notice (your primary key remains available).  If you don't drop it, the next step fails spectacularly:
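If you want to confirm what you're about to drop first, the Text index shows up as a DOMAIN index on FND_LOBS:

```sql
-- List Oracle Text (DOMAIN) indexes on FND_LOBS before dropping.
select owner, index_name, index_type, ityp_owner, ityp_name
from dba_indexes
where table_owner = 'APPLSYS'
and table_name = 'FND_LOBS'
and index_type = 'DOMAIN';
```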

drop index applsys.fnd_lobs_ctx;

6. Copy the dependent objects (grants, triggers, constraints and the like) to the interim table.  This ran really fast for me.

declare
   l_error_count pls_integer := 0;
begin
   dbms_redefinition.copy_table_dependents('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW', 1, TRUE, TRUE, TRUE, FALSE, l_error_count)
   ;
   dbms_output.put_line('Errors Occurred := '|| to_char(l_error_count))
   ;
end;
/

7. Finish the redefinition process.  Also very fast.

begin
    dbms_redefinition.finish_redef_table
       ('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW');
end;
/

8. Confirm it worked

select securefile
from dba_lobs
where table_name = 'FND_LOBS'
/
SEC
---
YES 

9. Space check

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
OWNER     TABLE_NAME   COLUMN_NAME   SEGMENT_NAME                B.BYTES/1024/1024
-------   ----------   -----------   -------------------------   -----------------
APPLSYS   FND_LOBS     FILE_DATA     SYS_LOB0008283556C00004$$               46987

Elapsed: 00:00:11.64

10. Drop the temp table

drop table applsys.fnd_lobs_new;

11. Recreate the index APPLSYS.FND_LOBS_CTX from the EBS script.  This ran about 5 hours for me.

cd $FND_TOP/sql
sqlplus apps
SQL> @aflobbld.sql APPLSYS APPS

12. Gather statistics

EXEC FND_STATS.GATHER_TABLE_STATS(OWNNAME => 'APPLSYS', TABNAME => 'FND_LOBS');

Parting Shot

It occurred to me during cleanup that if you create your "temporary" FND_LOBS table (in my case, FND_LOBS_NEW) in a new tablespace instead of apps_ts_media, you could probably reclaim all the disk allocated for this one-time project once you're done. The alternative is that your apps_ts_media is now 3x (or more) what you need: you eliminated your large basicfile FND_LOBS segment, and the compressed, deduplicated securefile version will presumably take a long, long time to fill up the tablespace you just expanded for the project. Because of high water marks, I was unable to shrink my tablespace in any significant way after I was done.
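A sketch of that alternative, with a hypothetical tablespace name and datafile path:

```sql
-- Hypothetical dedicated tablespace for the interim table.
create tablespace fnd_lobs_sf
  datafile '/u06/appdev/devdata/fnd_lobs_sf01.dbf'
  size 10g autoextend on next 1g;

-- Build FND_LOBS_NEW here (step 2 DDL, but "tablespace fnd_lobs_sf").
-- Caveat: finish_redef_table swaps the two tables' identities, so after
-- the swap the live FND_LOBS occupies this tablespace, while the old
-- basicfile segment sits under FND_LOBS_NEW back in APPS_TS_MEDIA;
-- dropping FND_LOBS_NEW then frees that space to APPS_TS_MEDIA.
```

Note the caveat in the comments: the new tablespace becomes the permanent home of FND_LOBS, so don't drop it; the reclaimable space is what the old segment gives back.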

That’s It Folks

I tested this a few ways from the functional standpoint.  I ran a report which printed a LOB (Invoice) which was part of a requisition.  I opened an attachment in Quality.  I saved an attachment in Quality.  Everything checked out.  On to a full regression cycle!  Thanks for tuning in, let me know if you find this useful.

December 4, 2015

FND_LOBS and Securefiles (Part I)

Filed under: 11g, R12.2 — kkempf @ 11:05 am

FND_LOBS, how I hate thee

In our EBS environment, users have filled up FND_LOBS with attachments for the past 10 years.  From a functional standpoint, this is mostly quotes (PDFs) attached to purchase orders, and M$ Office documents attached to customer orders.  It grows without bounds, and in most cases there's no expiration date in the table, nor any means to purge it with reassurance that I'm not breaking something.  During a database 12c class last summer in Chicago, I learned some of the benefits of securefiles and it seemed like something to look into.  Then I was at a conference in Raleigh in September and it came up again.  Seeing a trend, I decided to start playing around with it (finally, yes, I know this is not new technology!).

Securefiles

Securefiles are like lobs 2.0 in Oracle.  They're Oracle's attempt to better manage unstructured data, and they're a core database feature, having nothing directly to do with EBS.  The best document I found on it was here.  Here's what caught my attention: it can leverage deduplication and advanced compression.  I didn't see any easy way to tell what percentage of my FND_LOBS content was duplicate, but let's be honest: users don't care if they upload the same thing 10 times; it's not their problem.
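There's no packaged "how much is duplicated" report that I know of, but a rough estimate falls out of counting rows that share a file name and byte length; this is a heuristic, not a true content comparison:

```sql
-- Rough duplication estimate: rows sharing file name and LOB length
-- are probably the same upload repeated.
select count(*) total_rows
     , count(distinct file_name || '/' || dbms_lob.getlength(file_data)) distinct_files
     , round(sum(dbms_lob.getlength(file_data))/1024/1024) total_mb
from applsys.fnd_lobs;
```

The gap between total_rows and distinct_files hints at how much deduplication might buy you.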

Disclaimer

As usual, check with your sales rep before you do this stuff in production.  Some of what I’m doing requires a license for Advanced Compression on your core database, and that’s not cheap.

Sizing up the Problem

A quick query shows that the fnd_lobs table (really, its associated lob segment) is consuming 150 GB.  That may not seem staggering, except multiply it by all EBS instances, add dataguard, and account for increased backup times and sizes, and it gets my attention.  Also take into account that this one object accounts for ~15% of my database size.  It's been on my radar for a long time.

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
from dba_lobs a, dba_segments b
where a.segment_name = b.segment_name
and a.owner = b.owner
and a.table_name like 'FND_LOBS%'
order by b.bytes/1024/1024 desc
/

APPLSYS    FND_LOBS    FILE_DATA    SYS_LOB0000034032C00004$$    149115.25
APPLSYS    FND_LOBS_DOCUMENT    BLOB_CONTENT    SYS_LOB0000034093C00007$$    25
APPLSYS    FND_LOBS_DOCUMENT    CONTENT    SYS_LOB0000034093C00008$$    0.125

A Quick and Dirty Test Case

I wanted to see what my space savings would be if I converted this table to securefiles.  Thus I added a little space to apps_ts_media (well, a lot!) and created a table called fnd_lobs_new to look just like fnd_lobs, except it used securefiles as its lob storage choice, with high compression.

create
  table applsys.fnd_lobs_new
 (file_id          number
 ,file_name         varchar2( 256)
 ,file_content_type varchar2( 256) not null
 ,file_data blob
 ,upload_date     date
 ,expiration_date date
 ,program_name    varchar2( 32)
 ,program_tag     varchar2( 32)
 ,language        varchar2( 4) default( userenv( 'LANG') )
 ,oracle_charset  varchar2( 30) default( substr( userenv( 'LANGUAGE'),instr( userenv( 'LANGUAGE'),'.') + 1) )
 ,file_format     varchar2( 10) not null
 ,primary key( file_id ) validate
 )
 tablespace apps_ts_media pctfree 10 initrans 10 maxtrans 255 storage
 (
 initial 128k buffer_pool default
 )
 logging nocompress lob
 (
 file_data
 )
 store as securefile
 (
 deduplicate compress high tablespace apps_ts_media chunk 8192 storage( initial 128k buffer_pool default) pctversion 0 nocache logging
 ) 
 ;
alter table applsys.fnd_lobs_new nologging;
insert into applsys.fnd_lobs_new select * from applsys.fnd_lobs;  -- this takes a LONG time; 12+ hours for me
commit;
select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
SQL> /
OWNER     TABLE_NAME          COLUMN_NAME    SEGMENT_NAME                B.BYTES/1024/1024
-------   -----------------   ------------   -------------------------   -----------------
APPLSYS   FND_LOBS            FILE_DATA      SYS_LOB0000034032C00004$$           149115.25
APPLSYS   FND_LOBS_NEW        FILE_DATA      SYS_LOB0008283485C00004$$           46989.875
APPLSYS   FND_LOBS_DOCUMENT   BLOB_CONTENT   SYS_LOB0000034093C00007$$                  25
APPLSYS   FND_LOBS_DOCUMENT   CONTENT        SYS_LOB0000034093C00008$$                .125

Bottom Line

I reduced fnd_lobs from 150GB to 47GB.  That’s a win!

Next Steps

My test isn’t practical.  I simply wanted to see if there was enough gain to continue with an online conversion.  There is.  Next post I hope to show you exactly how to convert FND_LOBS to securefiles without any downtime.

October 29, 2015

Fun with adop

Filed under: 11g, R12.2 — kkempf @ 1:58 pm

Sphincter said what?

I was applying some ETCC-required database patches against a 12.2 DEV instance today.  I was hoping to get to AD/TXK Delta 7 and see how that worked.  But after a big opatch session, when I restarted services, adop phase=prepare threw this error:

Validating system setup…
[ERROR]     Failed to execute SQL statement :
select AD_ZD_ADOP.GET_INVALID_NODES() from dual
[ERROR]     Error Message :
[ERROR]     Failed to execute SQL statement :
select AD_ZD_ADOP.GET_INVALID_NODES() from dual
[ERROR]     Error Message :
[UNEXPECTED]Nodes “-1” are listed in ADOP_VALID_NODES table but not in FND_NODES table.
[UNEXPECTED]To correct this, run AutoConfig on nodes “-1”
[UNEXPECTED]Error while checking if this is a multi node instance

MOS had a direct hit on this (Doc ID 1988581.1): "Nodes '-1' are listed in ADOP_VALID_NODES table but not in FND_NODES table."  But it didn't fix the issue; in fact, per that note, I was already fine.

The Fix

On the database tier, go to $ORACLE_HOME/appsutil/admin and re-run adgrants, then recompile the package:

sqlplus / as sysdba

@adgrants apps

alter package apps.AD_ZD_ADOP compile body;

After this, adop phase=prepare ran happily enough, and included an explicit autoconfig.  I can’t promise this will work for you, but since the error message is rather vague and unhelpful I thought I’d post this.

October 28, 2015

RDBMS 11g vs. 12c

Filed under: 11g, 12c — kkempf @ 10:12 am

A bit of history

Last June, I wrote up a short summary of support implications for Database 12c.  Given that it’s Open World week, I thought I’d see if there were any relevant announcements in regards to this.  I went back to Document 742060.1 and was delighted to see a change since I’d last looked.  In June of 2015 it read:

NOTE: The fee for the first year of Extended Support for 11.2 has been waived for all customers. The Premier Support timeline has been extended one year to show that customers do not need to pay the Extended Support fee to receive Error Correction Support for that first year.

Now, it reads:

NOTE: The fee for Extended Support for 11.2 has been waived for all customers through 31 May 2017. No customer action is required to take advantage of Extended Support during this time.

and

Extended Support fees waived until May 31, 2017. An ES service contract is required starting 1-Jun-2017.

 

[Image: Oracle Database support roadmap, October 2015]

There’s no change log for the doc so I can’t tell when Oracle made this decision. Some of this appears to be semantics; instead of granting more “Premier support extensions” they’re delivering “Free Extended Support”.  While there is a subtle difference there, the bottom line is that if you’re on 11.2.0.4, you have a little breathing room to get to 12c!

Why does this matter?

For EBS customers like me, Release 12.2 wasn’t even certified with RDBMS 12.1.0.2 and Dataguard until August 2015.  This caused much angst for me, as I had two options: pay more to stay on 11.2 extended support, or rush into 12.1.0.2 and hope for the best in the 4 months Oracle gave me between the certification in August of 2015 and the January 2016 “end of free first year of extended support”.

Thanks, Oracle, for doing the right thing and extending the window for 11.2!

September 3, 2015

More fun with 12c: Online datafile moves

Filed under: 12c — kkempf @ 1:50 pm

A long time coming

I think I’ve been wondering why I couldn’t do this since 9i

SQL> select name, enabled from v$datafile where name like '%vertex%';
NAME                                        ENABLED
--------------------------------------------------------------------------- ----------
/u05/appdev/devdata/vertex01.dbf                        READ WRITE


SQL> alter database move datafile '/u05/appdev/devdata/vertex01.dbf' to '/u06/appdev/devdata/vertex01.dbf';

Database altered.

SQL> select name, enabled from v$datafile where name like '%vertex%';
NAME                                        ENABLED
--------------------------------------------------------------------------- ----------
/u06/appdev/devdata/vertex01.dbf                        READ WRITE


SQL>

Oracle RDBMS 12c table restore

Filed under: 12c, RMAN — kkempf @ 7:37 am

Database 12.1.0.2

So I’m playing with 12c in a development environment with EBS 12.2.3, trying to understand some of the new features.  I don’t use FRA, because as an EBS DBA, there’s not much logic in restoring only one table back, or (worse) flashing the whole ERP back.  That said, this new RMAN feature is pretty cool, and best of all, it requires no effort on my part.

Custom Table

Nothing special here, just a custom schema table (yes, I removed the schema name references from this post so as to "anonymize" it a bit).  I get a row count and the current SCN, then truncate the table.

SQL*Plus: Release 12.1.0.2.0 Production on Wed Sep 2 16:35:16 2015
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Enter password:
 Last Successful login time: Wed Sep 02 2015 16:34:56 -04:00
Connected to:
 Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select count(*) from label_data;
COUNT(*)
 ----------
 232
SQL> column current_scn format 999999999999999
 SQL> select sysdate, current_scn from v$database;
SYSDATE        CURRENT_SCN
 --------- ----------------
 02-SEP-15    5982678044431
SQL> truncate table label_data;
Table truncated.

RMAN Restore/Recover of a Table

Now let's fix it with the new RMAN 12c feature, doing the work in /u05/appdev/restore and renaming the table to restored_label_data.  The long and short of it is that RMAN does all the work for you.  It brings up a minimal instance, restores the table, exports it into the restored table you specified, then wipes out the minimal instance.  It took a while, about 30 minutes, but the main database was available the whole time.  I've removed some messages to save space, but you get the gist of it.  I suppose, in some abstract sampling manner, this also serves to test your RMAN backup to a degree.

RMAN> recover table label_data until scn 5982678044431 auxiliary destination '/u05/appdev/restore' remap table label_data:restored_label_data;
Starting recover at 02-SEP-15
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=425 device type=DISK
 RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time
List of tablespaces expected to have UNDO segments
 Tablespace SYSTEM
 Tablespace APPS_UNDOTS1
Creating automatic instance, with SID='degz'
initialization parameters used for automatic instance:
 db_name=DEV
 db_unique_name=degz_pitr_DEV
 compatible=12.1.0
 db_block_size=8192
 db_files=1000
 diagnostic_dest=/u01/appdev/oracle/devdb/12.1.0.2/log
 _system_trig_enabled=FALSE
 sga_target=2560M
 processes=200
 db_create_file_dest=/u05/appdev/restore
 log_archive_dest_1='location=/u05/appdev/restore' #No auxiliary parameter file used
starting up automatic instance DEV
Oracle instance started
Total System Global Area    2684354560 bytes
Fixed Size                     3714440 bytes
 Variable Size                654312056 bytes
 Database Buffers            2013265920 bytes
 Redo Buffers                  13062144 bytes
 Automatic instance created
contents of Memory Script:
 {
 # set requested point in time
 set until  scn 5982678044431;
 # restore the controlfile
 restore clone controlfile;
# mount the controlfile
 sql clone 'alter database mount clone database';
# archive current online log
 sql 'alter system archive log current';
 }
 executing Memory Script
executing command: SET until clause
Starting restore at 02-SEP-15
 allocated channel: ORA_AUX_DISK_1
 channel ORA_AUX_DISK_1: SID=2 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
 channel ORA_AUX_DISK_1: restoring control file
 channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/c-3959101112-20150902-01
 channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/c-3959101112-20150902-01 tag=TAG20150902T012532
 channel ORA_AUX_DISK_1: restored backup piece 1
 channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
 output file name=/u05/appdev/restore/DEV/controlfile/o1_mf_bygqo9w1_.ctl Finished restore at 02-SEP-15
sql statement: alter database mount clone database
sql statement: alter system archive log current
contents of Memory Script:
 {
 # set requested point in time
 set until  scn 5982678044431;
 # set destinations for recovery set and auxiliary set datafiles
 set newname for clone datafile  1 to new;
 set newname for clone datafile  2 to new;
 set newname for clone datafile  3 to new;
 set newname for clone datafile  4 to new;
 set newname for clone datafile  5 to new;
 set newname for clone datafile  295 to new;
 set newname for clone datafile  314 to new;
 set newname for clone datafile  8 to new;
 set newname for clone datafile  11 to new;
 set newname for clone datafile  288 to new;
 set newname for clone datafile  392 to new;
 set newname for clone datafile  20 to new;
 set newname for clone datafile  106 to new;
 set newname for clone tempfile  1 to new;
 set newname for clone tempfile  2 to new;
 set newname for clone tempfile  3 to new;
 set newname for clone tempfile  5 to new;
 # switch all tempfiles
 switch clone tempfile all;
 # restore the tablespaces in the recovery set and the auxiliary set
 restore clone datafile  1, 2, 3, 4, 5, 295, 314, 8, 11, 288, 392, 20, 106;
switch clone datafile all;
 }
 executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file 
renamed tempfile 2 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file 
renamed tempfile 3 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file 
renamed tempfile 5 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file
Starting restore at 02-SEP-15
 using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
...
contents of Memory Script:
{
# set requested point in time
set until  scn 5982678044431;
# online the datafiles restored or switched
sql clone "alter database datafile  354 online";
# recover and open resetlogs
recover clone database tablespace  "DATA", "SYSTEM", "APPS_UNDOTS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  354 online

Starting recover at 02-SEP-15
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00354: /u05/appdev/restore/TQFR_PITR_DEV/datafile/o1_mf_data_bygtnmkg_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/6sqfujuo_1_1
channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/6sqfujuo_1_1 tag=BACKUP_DEV_INCR_083115010001
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00354: /u05/appdev/restore/TQFR_PITR_DEV/datafile/o1_mf_data_bygtnmkg_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/93qg197k_1_1
channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/93qg197k_1_1 tag=BACKUP_DEV_INCR_090115010003
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00354: /u05/appdev/restore/TQFR_PITR_DEV/datafile/o1_mf_data_bygtnmkg_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/baqg3so8_1_1
channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/baqg3so8_1_1 tag=BACKUP_DEV_INCR_090215010002
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

archived log for thread 1 with sequence 4156 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4156.log
archived log for thread 1 with sequence 4157 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4157.log
archived log for thread 1 with sequence 4158 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4158.log
archived log for thread 1 with sequence 4159 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4159.log
archived log for thread 1 with sequence 4160 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4160.log
archived log for thread 1 with sequence 4161 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4161.log
archived log file name=/usr/local/oracle/archive/DEV1880762303__4156.log thread=1 sequence=4156
archived log file name=/usr/local/oracle/archive/DEV1880762303__4157.log thread=1 sequence=4157
archived log file name=/usr/local/oracle/archive/DEV1880762303__4158.log thread=1 sequence=4158
archived log file name=/usr/local/oracle/archive/DEV1880762303__4159.log thread=1 sequence=4159
archived log file name=/usr/local/oracle/archive/DEV1880762303__4160.log thread=1 sequence=4160
archived log file name=/usr/local/oracle/archive/DEV1880762303__4161.log thread=1 sequence=4161
media recovery complete, elapsed time: 00:00:01
Finished recover at 02-SEP-15
...
ORA-06512: at "SYS.DBMS_METADATA", line 10261
   EXPDP> . . exported "LABEL_DATA"                    207.4 KB     232 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_tqFr_jxbe" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_tqFr_jxbe is:
   EXPDP>   /u05/appdev/restore/tspitr_tqFr_90139.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_tqFr_jxbe" completed with 10 error(s) at Wed Sep 2 17:35:20 2015 elapsed 0 00:01:31
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_tqFr_Bipd" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_tqFr_Bipd":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "RESTORED_LABEL_DATA"           207.4 KB     232 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_tqFr_Bipd" successfully completed at Wed Sep 2 17:36:40 2015 elapsed 0 00:00:48
Import completed

sqlplus apps@dev
Enter password: 
Last Successful login time: Thu Sep 03 2015 08:15:09 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select count(*) from RESTORED_LABEL_DATA;

  COUNT(*)
----------
       232

SQL>



August 26, 2015

Fixing Enterprise Manager 12c Timeouts

Filed under: Enterprise Manager — kkempf @ 11:08 am


Old Habits Die Hard

I have a monitor devoted almost exclusively to the performance page on my old EM 11g, still monitoring my main E-Business Suite production instance.  I can see at a glance if the system is busy, has blocking locks, massive I/O, etc.  It just feels comfortable and gives me a quick status at a glance.

Enter Enterprise Manager 12c.  I installed 12.1.0.5 and have it monitoring everything but my most important environments at this point.  It took a bit of getting used to, but eventually I got the hang of it and worked out the handful of issues I had with it.  The biggest problem I had from a usability standpoint was that it keeps timing out on the performance home page I’m trying to keep running on my 3rd monitor.  There’s even a checkbox which says “Disable Timeout on Performance Home Page”, but apparently it doesn’t do anything.  And yes, I’m aware of the security issues and why it’s in place; in my case this is not a concern (physical security > application security).

If you scour My Oracle Support, you may come across Doc ID 1644004.1, which tells you how to increase the timeout.  What it doesn’t tell you is that if you set the value of oracle.adf.view.rich.poll.TIMEOUT to -1, you disable the timeout altogether.

Not Quite Done

Nothing is ever simple.  Apparently after making this change, EM falls back to a 45-minute default for oracle.sysman.eml.maxInactiveTime, which results in this screen:

[Image: EM session timeout dialog]

So you need to do this also:

emctl set property -name oracle.sysman.eml.maxInactiveTime -value -1 -sysman_pwd [your sysman password]

emctl stop oms

emctl start oms

June 24, 2015

Database 12c: Support, Installation Types, Costs and Clarifications

Filed under: 12c, Licensing — kkempf @ 10:45 am


What the heck is this all about?

This is a bit off my usual rambling path, but because I think it’s important to understand I thought it a useful diversion.  Database 12c is out there, and hopefully you’re moving towards it.  That 11.2 database that you love because it works reliably?  It’s got one foot in the support grave.  So I started to research all of the implications and thought I might save you some time by summarizing what I found here.

Support Expiry

What’s the difference between premier, extended and sustaining?  You can find the official information here.

Premier

In short, I interpret premier as the “normal” state, where you can get support at your expected cost, security updates come out regularly, you can log bugs and support will create fixes as required.

Extended

When you wait too long to upgrade, you fall into the extended support bucket.  You lose some things.  For example, Oracle stops certifying products against this older release.

Sustaining

You’re way behind the curve.  Critical patches, and other rather critical updates stop rolling out unless they existed before sustaining support began.  Not where you want to be in production unless you have no other choice.

How much time do I have to migrate to 12c database?

Not much, unless Oracle throws out another premier support extension.  You can see the official GA information here (see page 4 for database); Oracle bases the support windows on the General Availability (GA) date of the release.  The bottom line is that, as it stands right now, 11.2 is already out of premier support as of January 2015.  Good news: they waived that for 1 year.  "NOTE: The fee for the first year of Extended Support for 11.2 has been waived for all customers. The Premier Support timeline has been extended one year to show that customers do not need to pay the Extended Support fee to receive Error Correction Support for that first year."

That gives you until January 2016 to get to 12c before you fall onto extended support.  Who cares?  What's the difference?  You'd better contact your sales rep, but I know in the past I've paid a support surcharge to stay on an older release during extended support (I didn't have a choice; my 3rd-party application wasn't certified on the latest release yet).  Here are the official dates from Oracle:

                        RDBMS 11gR2       RDBMS 12c
GA Date                 September 2009    June 2013
Premier Support Ends    January 2015      July 2018
Extended Support Ends   January 2018      July 2021
Sustaining Support      Indefinite        Indefinite
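As a quick check of which support window you're actually in, you can pull the release right out of the instance (standard data dictionary views, nothing 12c-specific):

```sql
-- Which release am I on, and therefore which support window am I in?
select instance_name, version from v$instance;
```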

Database 12c: Standard Edition One Versus Enterprise Edition, Container vs. Single Instance

This caused me a lot of confusion so I thought I’d clarify some things.  For anyone with their head in the sand, database 12c introduces multi-tenant.  In a nutshell, this means you can run multiple databases in one instance, and share the data dictionary overhead in the container (system, sysaux, and more) while the data lives in the pluggable.  It’s way more complicated than that, I know, but that’s the gist of it.

           Standard Edition One   Standard Edition   Enterprise Edition   Container Install   In-Memory Feature
12.1.0.1   Yes                    Yes                Yes                  Optional            No
12.1.0.2   No                     No                 Yes                  Optional            Yes

So what does this mean?  As of right now, if you're running Standard Edition One 11g and trying to get to 12c, you cannot install 12.1.0.2; you must use 12.1.0.1.  The 12.1.0.2 installer doesn't ask which edition you want to put down, it only does Enterprise Edition!  During the install, the GUI asks whether you want to create the database as a container database.  If you're uninterested in multi-tenant (or can't afford it.. more on that in a moment) you can simply leave that box unchecked and you get what Oracle now refers to as a "Single Instance" database.
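If you want to verify what an installer actually gave you, the banner shows the edition, and in 12c v$database grew a CDB column (note: that column doesn't exist in 11g, so the second query only works on 12c):

```sql
-- Edition appears in the banner text
select banner from v$version;

-- CDB = YES means this is a container database (12c only)
select name, cdb from v$database;
```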

It gets more interesting with Standard Edition One.  I have a handful of production databases running on SE1 because I want them isolated from my big production instance, they don't do much, and so on.  So why not consolidate them under one multi-tenant SE1 environment and make my life easier?  Because you're not allowed to: SE1 allows 1 container and 1 pluggable per license.  Why anyone would run it that way is a mystery to me, but that was the answer I received from Oracle sales.  I don't know for certain whether the same applies to Standard Edition, but on Standard Edition One you can't run multiple pluggables.  Did I mention SE1 was designed to compete with M$ SQL Server, which has been doing its version of multi-tenant for years?

Break out your wallet

If you want to use any of the cool new features of 12c, including multi-tenant, bring your wallet to the negotiating table.  You can find list pricing here.  If you click on US Technology Commercial Price List, you will see that two of the most compelling new features of 12c come with an added cost:

Add On Cost Items

                                      Multitenant   In-Memory
Named User Plus                       $350          $460
  Software Update License & Support   $77           $101.20
Processor License                     $17,500       $23,000
  Software Update License & Support   $3,850        $5,060

These two aren't cheap.  Multitenant costs roughly an extra third of the base Enterprise Edition price, and In-Memory almost half.  If you don't know what In-Memory is, I'm not talking about the buffer cache.  You need to check it out, it's amazing.
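A quick sanity check of those ratios.  The Enterprise Edition per-processor base price of $47,500 is my assumption (it isn't in the price excerpt above), so treat the exact percentages as rough:

```python
# List prices per processor from the US Technology Commercial Price List.
# The EE base price of $47,500 is an assumption, not quoted in the post.
ee_processor = 47_500
multitenant = 17_500
in_memory = 23_000

# Add-on cost as a fraction of the base EE license
print(f"Multitenant adds about {multitenant / ee_processor:.0%}")
print(f"In-Memory adds about {in_memory / ee_processor:.0%}")
```

Either way you slice it, turning on both options adds most of another EE license per processor.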

How does this pertain to EBS?

It's complicated.  If you look at the good work being done by Steven Chan, you can see that 12.1.0.2 was officially certified with EBS 12.1 last fall, and if you check Certify on My Oracle Support you can see that for EBS 12.2 there are still some "lagging" certifications (most notably Data Guard, Active Data Guard, and transportable tablespaces).


Bottom line: do your homework; your mileage may vary.  I'd like to think that because these certifications came well after the RDBMS 12.1.0.2 GA date, Oracle might be generous and grant us EBS customers an extension with more time on 11gR2, but you'd be foolish to count on it.

Parting Shot

This page includes tabs to downloadable PDFs of the 12c database architecture diagrams: performance and data dictionary views, background process descriptions, multitenant architecture, and more.

April 28, 2015

Security Patch Releases with EBS: Mission Impossible

Filed under: R12.2, Security — kkempf @ 12:33 pm

Regression Test Time

We're about to enter our first R12.2 regression test.  High on my list is getting to Delta 6 on the AD/TXK side, and since the quarterly security patches just came out I figured I'd get up to date there as well.  For pretty good reasons we're still running an 11.2.0.4 database, and it took me some time to make ETCC (patch 17537119) happy that I had all the required patches.

Per 1967243.1, I figured I'd start with the core database.  Now there are combo PSUs covering database, OJVM, and GI, in addition to SPUs and standalone PSUs for each component.  It's gotten complicated since I last looked!  The patches are all installed with opatch; ideally I'd get the OJVM, SPU and PSU applied to the database home.
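Before applying anything, you can ask opatch about conflicts up front rather than finding out mid-apply.  A sketch, with paths from my test environment (adjust to yours; the exact patch zip name may differ):

```shell
# Point at the database home being patched
export ORACLE_HOME=/u01/apptest/oracle/testdb/11.2.0.4

# Unzip the PSU and run the conflict check from inside the patch directory
unzip p20299013_112040_Linux-x86-64.zip
cd 20299013
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
```

This produces the same conflict/superset analysis you'll see below, but without touching the home.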

OJVM (20406239)

This is a documented problem on MOS.  Apparently it's not really cumulative: I have to be on the October 2014 (or later) security baseline before I can apply this one.  Next.

The following make actions have failed :

Re-link fails on target “jox_refresh_knlopt ioracle”.

Do you want to proceed? [y|n]
n
User Responded with: N


PSU (20299013)

This one was even worse!

There are no patches that can be applied now.

Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
20488666, 20299013, 19791273, 19730032, 18260550, 17420796

Following patches are not required, as they are subset of the patches in Oracle Home or subset of the patches in the given list :
17811789, 19393542, 18828868, 18614015, 17892268, 17600719, 17468141, 16992075, 16929165

Following patches will be rolled back from Oracle Home on application of the patches in the given list :
20488666, 17811789, 19791273, 19730032, 19393542, 18260550, 18828868, 17420796, 18614015, 17892268, 17600719, 17468141, 16992075, 16929165

Conflicts/Supersets for each patch are:

Patch : 20299013

Bug Conflict with 20488666
Conflicting bugs are:
17912217 ETCC R12.2 requirement per 1594274.1

Bug Superset of 17811789
Super set bugs are:
17811789

Conflict with 19791273 ETCC R12.2 requirement per 1594274.1
Conflict details:
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/ktsp.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/kdc.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/kdt.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/ktsk.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/ktsp.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/ktsplb.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/ktspsrch.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/ktsp.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/kdt.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/ktsp.o

Bug Conflict with 19730032 ETCC R12.2 requirement per 1594274.1
Conflicting bugs are:
17174582,  18282562,  18244962,  17614134,  18674024,  17050888,  17478145,  18331850,  18964939,  17883081,  18436307

Bug Superset of 19393542
Super set bugs are:
19393542

Conflict with 18260550 ETCC R12.2 requirement per 1594274.1
Conflict details:
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/kzd.o
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/kzd.o

Bug Superset of 18828868
Super set bugs are:
18828868

Conflict with 17420796 ETCC R12.2 requirement per 1594274.1
Conflict details:
/u01/apptest/oracle/testdb/11.2.0.4/bin/lxinst

Bug Superset of 18614015
Super set bugs are:
18614015

Bug Superset of 17892268
Super set bugs are:
17892268

Bug Superset of 17600719
Super set bugs are:
17600719

Bug Superset of 17468141
Super set bugs are:
17468141

Bug Superset of 16992075
Super set bugs are:
16992075

Bug Superset of 16929165
Super set bugs are:
16929165

Security Patch Update 11.2.0.4.0 (20299015)

Surprise!

Conflicts/Supersets for each patch are:

Patch : 18203837

Conflict with 18260550 ETCC R12.2 requirement per 1594274.1
Conflict details:
/u01/apptest/oracle/testdb/11.2.0.4/lib/libserver11.a:/kzd.o

Patch : 19972566

Conflict with 18614015 ETCC R12.2 requirement per 1594274.1
Conflict details:
/u01/apptest/oracle/testdb/11.2.0.4/rdbms/admin/prvtutil.plb

Patch : 20506715

Conflict with 19730032 ETCC R12.2 requirement per 1594274.1
Conflict details:
/u01/apptest/oracle/testdb/11.2.0.4/lib/libgeneric11.a:/qcs.o

Patch : 20631274

Bug Superset of 17600719 ETCC R12.2 requirement per 1594274.1
Super set bugs are:
17600719

Following patches have conflicts: [   18260550   18203837   18614015   19972566   19730032   20506715 ]
Refer to My Oracle Support Note 1299688.1 for instructions on resolving patch conflicts.

Security or Compatibility: Why are these two at odds?

I'm trying to do the right thing, but Oracle makes it so hard that I've lost interest.  I don't have a month to figure out which merged patches I need, what can go and what can stay.  Why can't there be a single note for EBS, by version, with all patches for all components?  A database patch set that already deconflicted all the oddball ETCC requirements, and delivered something you could actually install out of the gate, would make life much easier.


April 7, 2015

ADOP intelligent warnings, the ever-fluid recommended tech patches, and multiple versions of the truth

Filed under: R12.2 — kkempf @ 1:34 pm

So there I was..

Running adop phase=prepare in my production R12.2 environment, and I notice a warning on the screen and in the logs:

[WARNING]   The following required database bug fixes <19393542> are missing from this node, <database hostname>. Refer to My Oracle Support Knowledge Document 1594274.1 to identify the patch that delivers this bug fix.

This raises some questions:  How did adop know I needed a patch?  How serious is this?
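As it turns out, ETCC stores its results in a table in the database (its own output says as much), which is presumably what adop checks.  You could peek at it yourself; note the applsys.txk_tcc_results table name here is my assumption, so verify it in your environment first:

```sql
-- Table name is an assumption; ETCC reports "Stored Technology Codelevel
-- Checker results in the database successfully" when it runs.
select * from applsys.txk_tcc_results;
```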

Consolidated ≠ Updated

I went and pulled the Consolidated List of Patches and Technology Bug Fixes (aka Doc ID 1594274.1), and it's changed a bit since we went live.  Mind you, I expected to find patches in there we hadn't applied as part of our go-live; we locked our code last October before the upgrade.  What I didn't expect was to find 14 missing patches on the database alone.  After some tedious cross-checking of my inventory against the document, I remembered checkDBpatch.sh (patch 17537119).

checkDBpatch.sh

Patch 17537119 is really cool.  This little gem compares an .xml file shipped in the patch against your opatch inventory and tells you what you're missing.  Sweet!

$ unzip p17537119_R12_GENERIC.zip
Archive:  p17537119_R12_GENERIC.zip
inflating: README.txt
inflating: checkDBpatch.cmd
inflating: txk_R1220_DB_base_bugs.xml
inflating: checkDBpatch.sh
$ ./checkDBpatch.sh
+===============================================================+
|    Copyright (c) 2005, 2014 Oracle and/or its affiliates.     |
|                     All rights reserved.                      |
|               EBS Technology Codelevel Checker                |
+===============================================================+
Executing Technology Codelevel Checker version: 120.23
Enter ORACLE_HOME value : /u01/appprod/oracle/proddb/11.2.0.4
Is this a RAC environment [y/n] : n
Enter ORACLE_SID value : PROD
Bugfix XML file version: 120.0.12020000.20
Proceeding with the checks…
Getting the database release …
Setting database release to 11.2.0.4
DB connectivity successful.
Table to store Technology Codelevel Checker results exists in the database.
STARTED Pre-req Patch Testing : Tue Apr  7 14:29:20 EDT 2015
Log file for this session : ./checkDBpatch_6400.log
Got the list of bug fixes to be applied and the ones to be rolled back.
Checking against the given ORACLE_HOME
Opatch is at the required version.
Found patch records in the inventory.
All the required one-offs are not present in the given ORACLE_HOME
List of missing bug fixes:
14046443
14255128
16299727
16359751
17250794
17401353
18260550
18282562
18331812
18331850
18440047
18485835
18604144
18689530
18730542
18828868
19291380
19393542
19472320
19487147
19791273
19896336
19949371
** Please refer to MOS Doc ID “1594274.1:Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes” to find the corresponding patch that delivers the bug fix. If an overlay patch is needed for any particular patch listed, the footnote for that patch also applies to the overlay patch.**
Stored Technology Codelevel Checker results in the database successfully.
Apply the missing bug fixes and run the script again to confirm.
FINISHED Pre-req Patch Testing : Tue Apr  7 14:29:24 EDT 2015
=========================================================
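Under the hood, what the script does is essentially a set difference between the bug fixes listed in its .xml file and the ones in your inventory.  A minimal sketch of the idea; the XML structure here is hypothetical (the real txk_R1220_DB_base_bugs.xml schema is Oracle's and may differ):

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature of txk_R1220_DB_base_bugs.xml, for illustration only
sample = """<bugs release="11.2.0.4">
  <bug number="19393542"/>
  <bug number="18260550"/>
</bugs>"""

# Required bug fixes per the XML file
required = {b.get("number") for b in ET.fromstring(sample).iter("bug")}

# In reality this set would be parsed from opatch lsinventory output
installed = {"18260550"}

missing = sorted(required - installed)
print(missing)  # ['19393542']
```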

One lies, one tells the truth

I realized that the output of checkDBpatch.sh did not agree with 1594274.1.  In fact, it contained more patches than 1594274.1.  I would have assumed they were aligned, and as a customer this is really confusing: which one is the definitive source?

In the end, I went with the checkDBpatch.sh, since it was a superset of the information contained in the Consolidated List of Patches and Technology Bug Fixes.  Perhaps a name change is in order?
