Kevin Kempf's Blog

December 9, 2015

FND_LOBS and Securefiles (Part II)

Filed under: 11g, 12c, R12, R12.2 — kkempf @ 3:03 pm

An Actionable Plan

Last post, I talked about how I was playing with securefiles, fnd_lobs, deduplication and advanced compression.  It took a bit of tweaking, but I think I can safely share my success at reining in FND_LOBS.  I’m going to outline the process of an online conversion of FND_LOBS from a basicfile (the default) to securefiles.

Jumping off a Bridge

Ever see Oracle do a presentation and the first slide is legalese called a Safe Harbor which basically says everything you are about to see and hear is subject to change and no business plans should be made on assumptions or potential future releases?  I feel like I should make a similar warning, let’s call it a bridge jumper.  Remember the old saying, “If all your friends jumped off a bridge, would you do it too?”.  Here’s what I’ll say: This works in DEV on EBS version R12.2, and I’ve done some minimal functional testing.  Don’t do it in PROD until you prove that it doesn’t impact something I forgot about or can’t foresee.  I recommend a regression pass.  Also don’t use products you’re not licensed for, such as advanced compression, even in non-production environments.

References

http://docs.oracle.com/cd/B28359_01/appdev.111/b28393/adlob_smart.htm

http://www.oracle.com/technetwork/articles/sql/11g-securefiles-084075.html

Just the Highlights

You can convert your FND_LOBS table via online redefinition from basicfiles to securefiles.  In doing so, you will define the new lob table as securefile storage, and if you use deduplication and compression clauses, you will probably save a lot of space.  I did, dropping from 150GB to about 50GB.

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
SQL> /
OWNER    TABLE_NAME  COLUMN_NAME  SEGMENT_NAME               B.BYTES/1024/1024
-------- ----------- ------------ -------------------------- -----------------
APPLSYS  FND_LOBS    FILE_DATA    SYS_LOB0000034032C00004$$          149115.25

By the Numbers

I really wish WordPress would let me paste code in better, as I could make this a lot easier on the eyes.  Well, you get what you get.

1. Increase the size of your APPS_TS_MEDIA tablespace until it can hold a second lob segment sized at some percentage of your existing FND_LOBS lob segment.  See my prior post to figure out how to determine this.
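As a sketch, the check-and-grow steps might look like this; the datafile path and sizes are placeholders, not values from my system:

```sql
-- How much free room does APPS_TS_MEDIA have right now?
select tablespace_name, round(sum(bytes)/1024/1024) free_mb
  from dba_free_space
 where tablespace_name = 'APPS_TS_MEDIA'
 group by tablespace_name;

-- Add space as needed (the file name and sizes here are made up)
alter tablespace apps_ts_media
  add datafile '/u01/oradata/PROD/apps_ts_media99.dbf'
  size 8g autoextend on next 1g maxsize 32g;
```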

2. Create FND_LOBS_NEW (yeah, creative name, right?) using the same syntax as your old table.  Note the deduplicate and compress high clauses.

create
  table applsys.fnd_lobs_new
  (
    file_id          number
  ,file_name         varchar2( 256)
  ,file_content_type varchar2( 256) not null
  ,file_data blob
  ,upload_date     date
  ,expiration_date date
  ,program_name    varchar2( 32)
  ,program_tag     varchar2( 32)
  ,language        varchar2( 4) default( userenv( 'LANG') )
  ,oracle_charset  varchar2( 30) default( substr( userenv( 'LANGUAGE'),instr( userenv( 'LANGUAGE'),'.') + 1) )
  ,file_format     varchar2( 10) not null
  ,primary key( file_id ) validate
  )
  tablespace apps_ts_media pctfree 10 initrans 10 maxtrans 255 storage
  (
    initial 128k buffer_pool default
  )
  logging nocompress lob
  (
    file_data
  )
  store as securefile
  (
    deduplicate compress high tablespace apps_ts_media chunk 8192 storage( initial 128k buffer_pool default) pctversion 0 nocache logging
  ) ;

3. Take your new table out of logging and enable parallel dml in an effort to save time:

  • alter table applsys.fnd_lobs_new nologging;
  • alter session force parallel dml parallel 4;
  • set timing on

4. Map the columns and kick off the redefinition; this initial copy into the interim table ran about 12 hours for me.

declare
  l_col_mapping varchar2(1000);
begin
  l_col_mapping :=
     'file_id file_id, '
   ||'file_name file_name, '
   ||'file_content_type file_content_type, '
   ||'file_data file_data, '
   ||'upload_date upload_date, '
   ||'expiration_date expiration_date, '
   ||'program_name program_name, '
   ||'program_tag program_tag, '
   ||'language language, '
   ||'oracle_charset oracle_charset, '
   ||'file_format file_format'
   ;
   dbms_redefinition.start_redef_table ('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW', l_col_mapping)
   ;
end;
/

5. Drop the intermedia (Oracle Text) index APPLSYS.FND_LOBS_CTX.  At this point, you may see adverse performance on FND_LOBS for some users, but as I understand it this index is over the insides of the LOBs, so I doubt anyone will notice (your primary key remains available).  If you don’t do this, the next step fails spectacularly.

drop index applsys.fnd_lobs_ctx;
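If you want to confirm you’re dropping the right object first, a standard dictionary query shows that FND_LOBS_CTX is a DOMAIN (Oracle Text) index:

```sql
-- FND_LOBS_CTX should show up as a DOMAIN index whose indextype belongs
-- to CTXSYS, i.e. an Oracle Text index over FILE_DATA
select owner, index_name, index_type, ityp_owner, ityp_name, status
  from dba_indexes
 where owner = 'APPLSYS'
   and index_name = 'FND_LOBS_CTX';
```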

6. Copy the dependent objects (indexes, triggers, constraints, grants) to the interim table.  This ran really fast for me.

declare
   l_error_count pls_integer := 0;
begin
   dbms_redefinition.copy_table_dependents('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW', 1, TRUE, TRUE, TRUE, FALSE, l_error_count)
   ;
   dbms_output.put_line('Errors Occurred := '|| to_char(l_error_count))
   ;
end;
/

7. Finish the redefinition process.  Also very fast.

begin 
    dbms_redefinition.finish_redef_table
       ('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW');
end;
/

8. Confirm it worked

select securefile
from dba_lobs
where table_name = 'FND_LOBS'
/
SEC
---
YES 

9. Space check

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
OWNER    TABLE_NAME  COLUMN_NAME  SEGMENT_NAME               B.BYTES/1024/1024
-------- ----------- ------------ -------------------------- -----------------
APPLSYS  FND_LOBS    FILE_DATA    SYS_LOB0008283556C00004$$              46987

Elapsed: 00:00:11.64

10. Drop the temp table

drop table applsys.fnd_lobs_new;

11. Recreate the index APPLSYS.FND_LOBS_CTX from the EBS script.  This ran about 5 hours for me.

cd $FND_TOP/sql
sqlplus apps
SQL> @aflobbld.sql APPLSYS APPS

12. Gather statistics

EXEC FND_STATS.GATHER_TABLE_STATS(OWNNAME => 'APPLSYS', TABNAME => 'FND_LOBS');

Parting Shot

It occurred to me during cleanup that if you create your “temporary” FND_LOBS table (in my case, FND_LOBS_NEW) in a new tablespace instead of apps_ts_media, once you’re done you could probably reclaim all the disk allocated for this one-time project.  The alternative is that your apps_ts_media is now 3x (or more) what you need: you eliminated your large basicfile FND_LOBS table and now have a compressed, deduplicated version on securefiles, which will presumably take a long, long time to fill up the apps_ts_media tablespace you expanded for the project.  Because of high water marks, I was unable to shrink my tablespace in any significant way after I was done.
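A sketch of that scratch-tablespace approach; the tablespace name, datafile path, and sizes are hypothetical, and you should verify how the name swap behaves in your own environment before relying on it:

```sql
-- One-time scratch tablespace to host FND_LOBS_NEW during the redefinition
create tablespace apps_ts_media_redef
  datafile '/u01/oradata/PROD/apps_ts_media_redef01.dbf'
  size 10g autoextend on next 1g maxsize 60g;

-- In step 2, point the interim table and its securefile lob at it:
--   tablespace apps_ts_media_redef ... store as securefile
--   ( ... tablespace apps_ts_media_redef ... )

-- After finish_redef_table the names swap, so the discarded basicfile
-- segments end up under FND_LOBS_NEW; once that table is dropped, the
-- tablespace left holding only project leftovers can be cleaned up.
```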

That’s It Folks

I tested this a few ways from the functional standpoint.  I ran a report which printed a LOB (Invoice) which was part of a requisition.  I opened an attachment in Quality.  I saved an attachment in Quality.  Everything checked out.  On to a full regression cycle!  Thanks for tuning in, let me know if you find this useful.


December 4, 2015

FND_LOBS and Securefiles (Part I)

Filed under: 11g, R12.2 — kkempf @ 11:05 am

FND_LOBS, how I hate thee

In our EBS environment, users have filled up FND_LOBS with attachments for the past 10 years.  From a functional standpoint, this is mostly quotes (.pdf’s) attached to purchase orders, and M$ Office documents attached to customer orders.  It grows without bounds, and in most cases there’s no expiration date included in the table or means to purge it with any reassurance that I’m not breaking something.  During a database 12c class last summer in Chicago, I learned some of the benefits of securefiles and it seemed like something to look into.  Then I was at a conference in Raleigh in September and it came up again.  Seeing a trend, I decided to start playing around with it (finally, yes, I know this is not new technology!).

Securefiles

Securefiles are like lobs 2.0 in Oracle.  It’s their attempt to better manage unstructured data within Oracle, and it’s a core database feature, having nothing directly to do with EBS.  The best document I found on it was here.  Here’s what caught my attention: it can leverage deduplication and advanced compression.  I didn’t see any easy way to see what % of my FND_LOBS content was duplicate, but let’s be honest, users don’t care if they upload the same thing 10 times, it’s not their problem.
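Out of curiosity, one rough way to estimate the duplicate percentage is to hash every LOB and count distinct hashes.  A sketch, assuming you have execute privilege on DBMS_CRYPTO; note this reads the entire lob segment, so on a 150GB table consider a date filter or sampling first:

```sql
-- If distinct_lobs is much lower than total_lobs, deduplication will pay off
select count(*) total_lobs,
       count(distinct dbms_crypto.hash(file_data, dbms_crypto.hash_sh1)) distinct_lobs
  from applsys.fnd_lobs
 where file_data is not null;
```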

Disclaimer

As usual, check with your sales rep before you do this stuff in production.  Some of what I’m doing requires a license for Advanced Compression on your core database, and that’s not cheap.
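As a sanity check on the licensing question, the database keeps its own feature-usage bookkeeping.  Something like this shows whether compression options have already been exercised (DBA_FEATURE_USAGE_STATISTICS is a standard view, though the exact feature names vary by version):

```sql
select name, detected_usages, currently_used, last_usage_date
  from dba_feature_usage_statistics
 where lower(name) like '%compression%'
 order by name;
```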

Sizing up the Problem

A quick query shows that the fnd_lobs table (really, it’s associated lob segment) is consuming 150 GB.  That may not seem staggering, except multiply it by all EBS instances, add dataguard, and account for increased backup times and sizes, and it gets my attention.  Also take into account that this one object accounts for ~15% of my database size.  It’s been on my radar for a long time.

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
from dba_lobs a, dba_segments b
where a.segment_name = b.segment_name
and a.owner = b.owner
and a.table_name like 'FND_LOBS%'
order by b.bytes/1024/1024 desc
/

APPLSYS  FND_LOBS           FILE_DATA     SYS_LOB0000034032C00004$$  149115.25
APPLSYS  FND_LOBS_DOCUMENT  BLOB_CONTENT  SYS_LOB0000034093C00007$$         25
APPLSYS  FND_LOBS_DOCUMENT  CONTENT       SYS_LOB0000034093C00008$$       .125

A Quick and Dirty Test Case

I wanted to see what my space savings would be if I converted this table to securefiles.  Thus I added a little space to apps_ts_media (well, a lot!) and created a table called fnd_lobs_new to look just like fnd_lobs, except it used securefiles as its lob storage choice, with deduplication and high compression.

create
  table applsys.fnd_lobs_new
 (file_id          number
 ,file_name         varchar2( 256)
 ,file_content_type varchar2( 256) not null
 ,file_data blob
 ,upload_date     date
 ,expiration_date date
 ,program_name    varchar2( 32)
 ,program_tag     varchar2( 32)
 ,language        varchar2( 4) default( userenv( 'LANG') )
 ,oracle_charset  varchar2( 30) default( substr( userenv( 'LANGUAGE'),instr( userenv( 'LANGUAGE'),'.') + 1) )
 ,file_format     varchar2( 10) not null
 ,primary key( file_id ) validate
 )
 tablespace apps_ts_media pctfree 10 initrans 10 maxtrans 255 storage
 (
 initial 128k buffer_pool default
 )
 logging nocompress lob
 (
 file_data
 )
 store as securefile
 (
 deduplicate compress high tablespace apps_ts_media chunk 8192 storage( initial 128k buffer_pool default) pctversion 0 nocache logging
 ) 
 ;
alter table applsys.fnd_lobs_new nologging;
insert into applsys.fnd_lobs_new select * from applsys.fnd_lobs;  -- this takes a LONG time; as in 12+ hours for me
select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
SQL> /
OWNER    TABLE_NAME         COLUMN_NAME   SEGMENT_NAME               B.BYTES/1024/1024
-------- ------------------ ------------- -------------------------- -----------------
APPLSYS  FND_LOBS           FILE_DATA     SYS_LOB0000034032C00004$$          149115.25
APPLSYS  FND_LOBS_NEW       FILE_DATA     SYS_LOB0008283485C00004$$          46989.875
APPLSYS  FND_LOBS_DOCUMENT  BLOB_CONTENT  SYS_LOB0000034093C00007$$                 25
APPLSYS  FND_LOBS_DOCUMENT  CONTENT       SYS_LOB0000034093C00008$$               .125

Bottom Line

I reduced fnd_lobs from 150GB to 47GB.  That’s a win!

Next Steps

My test isn’t practical.  I simply wanted to see if there was enough gain to continue with an online conversion.  There is.  Next post I hope to show you exactly how to convert FND_LOBS to securefiles without any downtime.

October 29, 2015

Fun with adop

Filed under: 11g, R12.2 — kkempf @ 1:58 pm

Sphincter said what?

I was doing some core ETCC database patching against a 12.2 DEV instance today.  I was hoping to get to AD/TXK Delta 7 and see how that worked.  But after a big opatch session, when I restarted services, adop phase=prepare threw this error:

Validating system setup…
[ERROR]     Failed to execute SQL statement :
select AD_ZD_ADOP.GET_INVALID_NODES() from dual
[ERROR]     Error Message :
[ERROR]     Failed to execute SQL statement :
select AD_ZD_ADOP.GET_INVALID_NODES() from dual
[ERROR]     Error Message :
[UNEXPECTED]Nodes "-1" are listed in ADOP_VALID_NODES table but not in FND_NODES table.
[UNEXPECTED]To correct this, run AutoConfig on nodes "-1"
[UNEXPECTED]Error while checking if this is a multi node instance

MOS had a direct hit on this (Doc ID 1988581.1): “Nodes ‘-1’ are listed in ADOP_VALID_NODES table but not in FND_NODES table.”  But it didn’t fix the issue; in fact, according to this note, I was fine.

The Fix

On the database tier, go to $ORACLE_HOME/appsutil/admin and re-run adgrants, then recompile the package:

sqlplus / as sysdba

@adgrants apps

alter package apps.AD_ZD_ADOP compile body;

After this, adop phase=prepare ran happily enough, and included an explicit autoconfig.  I can’t promise this will work for you, but since the error message is rather vague and unhelpful I thought I’d post this.

October 28, 2015

RDBMS 11g vs. 12c

Filed under: 11g, 12c — kkempf @ 10:12 am

A bit of history

Last June, I wrote up a short summary of support implications for Database 12c.  Given that it’s Open World week, I thought I’d see if there were any relevant announcements in regards to this.  I went back to Document 742060.1 and was delighted to see a change since I’d last looked.  In June of 2015 it read:

NOTE: The fee for the first year of Extended Support for 11.2 has been waived for all customers. The Premier Support timeline has been extended one year to show that customers do not need to pay the Extended Support fee to receive Error Correction Support for that first year.

Now, it reads:

NOTE: The fee for Extended Support for 11.2 has been waived for all customers through 31 May 2017. No customer action is required to take advantage of Extended Support during this time.” and “Extended Support fees waived until May 31, 2017. An ES service contract is required starting 1-Jun-2017.

 


There’s no change log for the doc so I can’t tell when Oracle made this decision. Some of this appears to be semantics; instead of granting more “Premier support extensions” they’re delivering “Free Extended Support”.  While there is a subtle difference there, the bottom line is that if you’re on 11.2.0.4, you have a little breathing room to get to 12c!

Why does this matter?

For EBS customers like me, Release 12.2 wasn’t even certified with RDBMS 12.1.0.2 and Dataguard until August 2015.  This caused much angst for me, as I had two options: pay more to stay on 11.2 extended support, or rush into 12.1.0.2 and hope for the best in the 4 months Oracle gave me between the certification in August of 2015 and the January 2016 “end of free first year of extended support”.

Thanks, Oracle, for doing the right thing and extending the window for 11.2!

February 25, 2015

R12 Takeaways and Warnings, Part 3 (Java Clients, Dataguard with editioning, logs, crons and the Workflow Notification Mailer)

Filed under: 11g, Dataguard, R12.2 — kkempf @ 1:58 pm


More R12.2 War Stories

I thought I’d go back to my R12.2 war stories today.  To be honest, I still have plenty to talk about even after this post, but a journey of a thousand miles begins with one step, right?

Java Clients

What a pain in the 4th point of contact.  We went into this upgrade knowing we wanted to deploy the most advanced Java client to the desktop (for forms) we could, so we wouldn’t have to deal with it again for a while.  Wow, it’s a terrible hot mess.  We settled on 1.7.67.  Oracle, could we just call it 7.67?  Why do we need a 1 in front?  Sorry, back to the topic at hand.  Java 7 is smart, and won’t run unsigned jar files without much ado and many, many user clicks.  This is all good in the name of security, I suppose, but it sucks to administer.  At one point here I talked about the problems that arise.  Oracle tries to help mitigate this with a new product called Java Advanced Management Console, but it’s really new and really doesn’t seem to fully meet our needs.  So bottom line here: figure out how to get a java code signing certificate from your favorite CA (Thawte worked great for us) and what the new rules are for signing jar files.  You can search Enhanced Jar Signing on MOS; really you just need to understand adjkey to build the key, pick your CA, and use adjkey again to import the certificate.  Then you run adadmin (1,4) to regenerate jar files.

Dataguard

We utilize active dataguard with a physical standby against our 11.2.0.4 Linux x86_64 database.  You may think Dataguard is a curious topic for an R12 upgrade.  The truth is that the upgrade itself has little to do with dataguard; in fact I disabled it before starting the upgrade because a) I didn’t want to ship hundreds of gigabytes of data down the pipe and b) I had no faith it would actually work.  After I was done with the upgrade I rebuilt the dataguard environment using RMAN, and it crashed every week for 6 weeks.  There are a few known issues out there, but the error manifested itself like this:

Sat Dec 13 14:17:30 2014
Errors in file /u01/appprod/oracle/proddb/diag/rdbms/proddg/PROD/trace/PROD_lgwr_24265.trc:
ORA-04021: timeout occurred while waiting to lock object
LGWR (ospid: 24265): terminating the instance due to error 4021
Sat Dec 13 14:17:30 2014
System state dump requested by (instance=1, osid=24265 (LGWR)), summary=[abnormal instance termination].
System State dumped to trace file /u01/appprod/oracle/proddb/diag/rdbms/proddg/PROD/trace/PROD_diag_24236_20141213141730.trc
Dumping diagnostic data in directory=[cdmp_20141213141730], requested by (instance=1, osid=24265 (LGWR)), summary=[abnormal instance termination].
Instance terminated by LGWR, pid = 24265

And now you have no dataguard!  After trying various patches (17588480, 19631234, and 16299727), it appears the real problem was that dataguard didn’t handle editioning well.  The fix was patch 16299727 plus adding these two parameters to my pfile/spfile:

_adg_parselock_timeout=500  # dataguard editioning fix
event="16717701 trace name context forever, level 104887600" # dataguard fix

Logs

There are lots of logs in R12.2.  Pretty safe, obvious statement, right?  Well, you need to go figure out where they are, because they all moved in 12.2.  If you don’t, you’re gonna have a mess or possibly run out of disk.  I cron’d up a bunch of find commands to get rid of them; obviously don’t paste blindly, you need to tweak the retentions to your needs.  Some of these are supposed to be covered by cleanup jobs, but I find this is a nice insurance policy:

# Remove old outputs $APPLCSF/out or /u02/appprod/fs_ne/inst/PROD_hostname/logs/appl/conc/out
0 8 * * 1 /usr/bin/find /u02/appprod/fs_ne/inst/PROD_hostname/logs/appl/conc/out -mtime +30 -type f -exec rm -rf {} \;
# Remove old reqs $APPLCSF/log or /u02/appprod/fs_ne/inst/PROD_hostname1/logs/appl/conc/log
0 8 * * 1 /usr/bin/find /u02/appprod/fs_ne/inst/PROD_hostname/logs/appl/conc/log -mtime +30 -type f -exec rm -rf {} \;
# Remove Weblogic oacore Server Logs
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server1/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server1/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server2/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server2/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server3/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server3/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server4/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oacore_server4/logs -mtime +7 -type f -exec rm -rf {} \;
# Remove Weblogic Admin Server Logs
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/AdminServer/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/AdminServer/logs -mtime +7 -type f -exec rm -rf {} \;
# Remove Weblogic Forms-c4ws Server Logs
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms-c4ws_server1/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms-c4ws_server1/logs -mtime +7 -type f -exec rm -rf {} \;
# Remove Weblogic oafm Server Logs
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oafm_server1/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/oafm_server1/logs -mtime +7 -type f -exec rm -rf {} \;
# Remove Weblogic forms Server Logs
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server1/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server1/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server2/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server2/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server3/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server3/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server4/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/user_projects/domains/EBS_domain_PROD/servers/forms_server4/logs -mtime +7 -type f -exec rm -rf {} \;
# Remove Weblogic wlst Server Logs
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/logs -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/logs -mtime +7 -type f -exec rm -rf {} \;
# Remove Weblogic OHS Logs
0 8 * * 1 /usr/bin/find /u02/appprod/fs1/FMW_Home/webtier/instances/EBS_web_PROD_OHS1/diagnostics/logs/OHS/EBS_web_PROD -mtime +7 -type f -exec rm -rf {} \;
0 8 * * 1 /usr/bin/find /u02/appprod/fs2/FMW_Home/webtier/instances/EBS_web_PROD_OHS1/diagnostics/logs/OHS/EBS_web_PROD -mtime +7 -type f -exec rm -rf {} \;

Crons

While we’re on the subject of cleaning up logs, let’s talk about crons.  If it weren’t for cron, my R12.2 environment would not run.  Period.  Besides cleaning up logs, I have crons to do things like

  • rotate MSCA ports (telnet server)
  • bounce MSCA (daily!  it’s horribly unstable)
  • monitor various R12 related things from the OS (for example, use curl in a shell script to ensure the front end login page is up)
  • bounce JVMs every week (really, you need to do this)
  • cleanup apps sessions which otherwise won’t timeout or die (more on this later, worst surprise of the upgrade)
  • preclone the apps tier
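As an illustration of the curl-based front end check, here’s a minimal sketch.  The URL, the mail recipient, and the decision to allow a 302 are my assumptions for the example, not details from my actual script:

```shell
#!/bin/sh
# page_ok: decide whether an HTTP status code means the login page is up.
# 302 is allowed here because a login URL may redirect before rendering.
page_ok() {
  case "$1" in
    200|302) return 0 ;;
    *)       return 1 ;;
  esac
}

# Cron'd check (URL and recipient are placeholders):
# STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 30 \
#   "http://ebs.example.com:8000/OA_HTML/AppsLogin")
# page_ok "$STATUS" || echo "EBS login page down (HTTP $STATUS)" \
#   | mailx -s "EBS alert" dba@example.com
```
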

Workflow Notification Mailer

If you’re using this, especially if your mailer accepts IMAP/inbound email responses as a part of the workflow, make sure you understand how it looks in 12.2.  It does not survive the upgrade from 11i, and the screens are totally different.  As far as I can tell, they don’t require any new or additional information to work, they just changed around the configuration screens so as to confuse you.  Bottom line is this: it mostly works once you get it configured right, but you should be aware that there’s a patch 18842914 for the mailer which greatly improved its reliability.  I put that in because it kept crashing when some 3rd party server was spamming it with email bounces (yeah, I’m not going into that any further).

Coming Soon

I still haven’t covered the most annoying things… like adop, the techstack, custom tops, MSCA in general, report manager replacing the ADI desktop client, and having to manually kill sessions to keep the database from dying…

July 15, 2013

Before flashback DB was cool

Filed under: 11g, Oracle, RMAN — kkempf @ 3:50 pm

There was RMAN to flash your database back in time:

run
{
set until time "to_date('07/04/13 04:00:00','mm/dd/yy hh24:mi:ss')";
allocate auxiliary channel d1 type disk;
allocate auxiliary channel d2 type disk;
restore database;
recover database;
alter database open resetlogs;
}

May 17, 2013

yum making RDBMS installs easier

Filed under: 11g, Linux — kkempf @ 12:36 pm


For those of you tasked with a fresh install of an Oracle 11.2 RDBMS on linux, I stumbled across a real time saver.

Oracle has put together a package called oracle-rdbms-server-11gR2-preinstall.  As far as I can tell, the package itself does absolutely nothing after installation.  But the key is that it has, as prerequisites, all of the packages required for an RDBMS 11.2 install per the installation guide for Linux.  This means I don’t have to manually yum install all the required packages; they just get pulled in by this one package!  In addition, it tweaks two annoying configuration files, sysctl.conf and limits.conf, with required settings!

One simple command:

yum install oracle-rdbms-server-11gR2-preinstall

limits.conf:

# oracle-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
oracle   soft   nofile    1024
# oracle-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
oracle   hard   nofile    65536
# oracle-rdbms-server-11gR2-preinstall setting for nproc soft limit is 2047
oracle   soft   nproc    2047
# oracle-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
oracle   hard   nproc    16384
# oracle-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
oracle   soft   stack    10240
# oracle-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
oracle   hard   stack    32768
@dba            soft    memlock         67108864
@dba            hard    memlock         67108864

sysctl.conf

# oracle-rdbms-server-11gR2-preinstall setting for fs.file-max is 6815744
fs.file-max = 6815744

# oracle-rdbms-server-11gR2-preinstall setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128

# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmni is 4096
kernel.shmmni = 4096

# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmall is 1073741824 on x86_64
# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmall is 2097152 on i386
kernel.shmall = 1073741824

# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmax is 4398046511104 on x86_64
# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmax is 4294967295 on i386
kernel.shmmax = 4398046511104

# oracle-rdbms-server-11gR2-preinstall setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144

# oracle-rdbms-server-11gR2-preinstall setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304

# oracle-rdbms-server-11gR2-preinstall setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144

# oracle-rdbms-server-11gR2-preinstall setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576

# oracle-rdbms-server-11gR2-preinstall setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576

# oracle-rdbms-server-11gR2-preinstall setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500

Here’s what it looks like on the command line:

[screenshots of the yum install transaction]

I can’t take anything away from this.  I installed this package and the 11.2 RDBMS installer didn’t complain about anything being missing. It occurs to me that even if you’re using Red Hat Linux, you could add the Oracle Repo (they’re free now from what I understand) and still leverage this tool!

February 12, 2013

Installing the Oracle 11g Client on Ubuntu 12.1 64-bit

Filed under: 11g, Ubuntu — kkempf @ 2:01 pm

This pertains to who?

I suppose the audience for this post is pretty narrow; namely, Oracle DBAs running Ubuntu as their desktop.  It’s basically a way to cram a square peg into a round hole and get the Oracle 11g (11.2.0.1) client running on Ubuntu.  I won’t go on another rant about why I don’t run Windows at work.  The beauty of it is, these days I don’t have to; have you seen Windows 8?  Ha, OK, let me get to it here.

My original post

I performed this experiment a few years back when I first went to Ubuntu as my desktop, and it was on version 10.  At some point since then, my walkthrough started to fail, so I was compelled to write it up again as I hacked up my OS to get it working.  Still, the original is more verbose, and probably worth a look.

Download the Installer

I grabbed V17532-1.zip from edelivery.oracle.com; this is the 11.2.0.1 64-bit linux client.  I think you could get it from OTN also.  Obviously, unzip this file somewhere convenient.

Get the required packages installed in Ubuntu

Basically, this means get all the libaio packages installed.  From software center, yours should look like this

[screenshot: Ubuntu Software Center showing the libaio packages installed]

Create symbolic links

You will need the following, run as root:

  • ln -s /usr/lib/x86_64-linux-gnu /usr/lib64
  • ln -s /lib/x86_64-linux-gnu/libgcc_s.so.1 /lib/libgcc_s.so (I created this before the install, since it was absent, and now it’s a file and not a link.  Not sure what to make of that, exactly)
  • ln -s /usr/bin/basename /bin/basename
  • ln -s /usr/bin/awk /bin/awk
  • I think the installer creates this one:  ln -s $ORACLE_HOME/lib/libagtsh.so.1.0 $ORACLE_HOME/lib/libagtsh.so


Run the installer

runInstaller

The installer complains about the versions of all your packages; just hit ignore all, and force it to proceed.  Next, next, next just like Windows.

You will have to run 2 root.sh type scripts from the command line after the installer finishes.  There should be no errors whatsoever after the system check warnings.

Final Proof

[screenshot of the working client]

 

February 14, 2012

Switching from Redhat Linux to Oracle Linux in about 5,000 easy steps

Filed under: 11g, Oracle Linux — kkempf @ 4:15 pm

Wayback When

I remember being at Oracle Open World when Larry Ellison unveiled Oracle Enterprise Linux (OEL, which is now just Oracle Linux, or OL).  I think I even have a foam Oracle penguin and maybe even a t-shirt somewhere.  I was trying to understand why, as a loyal Red Hat customer, I’d ever consider switching over to the “dark side”.  I even remember laughing to a colleague, “Who in their right mind would want Oracle to support their Linux environment?  They can’t even support their database!”

Warming up to the idea

I thought it might be useful to note, for a moment, my experience level with Linux.  I’ve been using some flavor of Unix/Linux in various workplaces since 1999.  I hold an RHCSA (Red Hat Certified System Administrator) cert and I hope to pursue the next level, RHCE, within a few months.  Why am I mentioning this?  To demonstrate that I’m not an Oracle fanboy; if anything, I have more of an inclination to run Red Hat than any other flavor of Linux at the moment.

As I mentioned in another post, we recently did a fairly major hardware refresh in our datacenter.  My 11i production database is currently a physical machine, and the box was available to me to tinker with prior to migration.  We’d even gotten to the point where we’d installed Red Hat Enterprise Linux (RHEL) 5.7 before I got the crazy idea to take a 2nd look at Oracle Linux.  I have no idea what the adoption rate is for OL, but they claim 8,000 customers on their information page.  I don’t know if that’s a lot or not.

What initially drove me to even consider Oracle Linux was not cost, but rather a series of really bad support tickets I had with Red Hat: unrelated system service requests where Red Hat support went 0 for 3.  Why did my system lock up and the kernel panic?  Red Hat: no idea.  Twice; the second time with really good logging enabled.  Then I had an issue where system CPU time (as opposed to user CPU) was crazy high, in some sar reports as much as 20% of total CPU usage.  Why is it so high?  Red Hat: no idea.  At this point the little light bulb went off in my head: I can pay less than half as much for bad support from Oracle.  And that’s really the pessimistic view.  In truth, there are some actual advantages to running Oracle on Oracle Linux, especially when you consider the Oracle Unbreakable Enterprise Kernel (UEK).

The Flavors of Oracle Linux

You can go, right now, and pull Oracle Linux and install it on your machine.  You will, of course, not be able to open a ticket (through My Oracle Support).  Basically, there are three flavors of support at different cost levels:

  • Network Support: patches and updates only (this flavor was not offered by sales, not sure about this level)
  • Basic Support: add “complete Linux server lifecycle management, cluster software” (this is what we have)
  • Premier Support: add ksplice

I think I should take a minute to define a few things more clearly, as I wish my sales team had:

  • Unbreakable Enterprise Kernel (UEK): Oracle’s home-grown kernel, available at any level of support.  From what I can tell, this kernel especially targets what Oracle perceives as deficiencies in the Red Hat kernel’s ability to handle big multi-processor (SMP) machines.
  • ksplice: a zero-downtime kernel patcher.  This is available only if you buy Premier Support and only if you are running the UEK.
  • Red Hat compatible kernel: the kernel Oracle builds from the same open-source feed Red Hat gets.  You can run Oracle Linux using either the UEK or the Red Hat compatible kernel, selected at boot time via grub.
  • The Physical Address Extension (PAE) 32-bit kernel is available for Oracle Linux 5 (x86).  This was important to me because I run the 11i front end (it has to be 32-bit!) on the PAE kernel, which gets around the 4GB RAM limit imposed by 32-bit architecture.
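Since the UEK/Red Hat compatible choice is made at boot via grub, it ends up as two entries in /etc/grub.conf. A representative fragment follows; the kernel version strings and volume group names here are illustrative, yours will differ:

```
default=0
timeout=5
# Entry 0: Oracle's Unbreakable Enterprise Kernel (UEK)
title Oracle Linux Server (2.6.32-100.el5uek)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-100.el5uek ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.32-100.el5uek.img
# Entry 1: Red Hat compatible kernel
title Oracle Linux Server (2.6.18-274.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-274.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-274.el5.img
```

Setting `default=0` boots UEK; change it to 1 (or pick the entry at the grub menu) to stay on the Red Hat compatible kernel.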

Things I wish I’d known (or researched better) in retrospect

Before I dig into the “how” part of how to convert your Red Hat 5 machine to Oracle Linux, I thought I’d tell you more about the areas in which I have buyer’s remorse.

  • UEK.  This was one of the big upsells for me on Oracle Linux.  I planned to cut over to the Red Hat compatible kernel, then begin regression testing the “super” UEK kernel.  Except for one thing: we’re a VMware shop, and UEK is not certified to run as a VMware guest.  Until that changes, UEK is DOA in our datacenter.  If UEK is DOA, then so is ksplice.  If ksplice is DOA, then I definitely don’t need Premier Support.
  • Unbreakable Linux Network (ULN) pales in comparison to Red Hat Network.  You cannot release patches to servers the way you can with Red Hat Network.  Basically, you can see what systems are registered, whether they need updates, and what your CSI is.  Beyond that, you can see what versions of what packages are available in what release.
  • Managing servers is rather manual.  There is some promise that Enterprise Manager may make server management easier, but I can’t verify this.

Switching to Oracle Linux

I’m assuming at this point that you’re running RHEL 5.7.  While you could certainly be running something else, you’d have to be smart enough to make some changes to the steps I outline.  RHEL 6 isn’t certified with much, as far as I can tell.

This outline is derived from a lot of disparate sources.  In fact, I wish Oracle had one good, thorough document to walk me through all this, but they don’t appear to.

  • Go get the appropriate up2date packages from Oracle, based on the RH release you’re running, and land the RPMs somewhere on your server.
  • Install the up2date RPMs
    • rpm -Uvh up2date*rpm
    • Import the GPG Key for RPMs
      • rpm --import /usr/share/rhn/RPM-GPG-KEY
    • Save yourself a headache, update the RHN uuid of your server
      • # /usr/bin/uuidgen -r
        • e949dadc-c182-3ec2-9de7-44ag8a0d2bea
        • vi /etc/sysconfig/rhn/up2date-uuid
          • confirm the line rhnuuid matches the key you just generated
          • if not, pound it out and replace it:
            • rhnuuid=e949dadc-c182-3ec2-9de7-44ag8a0d2bea
    • Run the Oracle Registration TUI (Text User Interface)
      • up2date-nox --register (this actually launches the TUI) ** for OL6 it’s bundled now; just run uln_register
      • If you have a proxy server between you and the internet
        • export http_proxy=http://username:password@yourproxy.domain.com:<your port>
        • edit /etc/yum.conf for future ease
          • under [main] add this line
          • proxy=http://username:password@yourproxy.domain.com:<your port>
        • Here’s screenshots from the TUI

      It always seems to say this...

      Hit next to waive all your rights

      Not sure I like hard coding my ULN login

      Hardware gathered information

      RPMs

  • Update your Repos and packages
    • cd /etc/yum.repos.d
    • rename your existing .repo files
      • mv rhel-source.repo rhel-source.repo.old
      • mv rhel-debuginfo.repo rhel-debuginfo.repo.old
      • note that the .repo suffix is what signifies that a repo file is “active”
    • use wget to fetch the oracle repos
    • edit the public-yum-el5.repo to activate (enable) the appropriate subscription channels
      • [ol5_u7_base]
      • enabled=1
    • update your packages
      • # up2date -i yum-rhn-plugin
      • # yum update
    • # yum install kernel
    • # yum install kernel-uek (optional, install the unbreakable kernel)
    • # yum install oracle-linux (optional Oracle packages, pair with UEK I believe)
  • reboot to new kernel
    • check /etc/grub.conf to ensure it is to your satisfaction
    • # reboot -i to restart the host, and select the appropriate kernel when the grub menu appears
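The repo and package steps above can be sketched as one script. This is a dry-run sketch, not a definitive implementation: by default it only prints the plan, and you set DRYRUN=0 and run it as root to execute. The channel name ol5_u7_base matches the example above, and the wget URL is Oracle’s public yum server as it stood at the time; verify both against your release.

```shell
#!/bin/sh
# Dry-run sketch of the repo/package conversion steps above.
# DRYRUN=1 (the default) prints each command; DRYRUN=0 executes it.
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

conversion_plan() {
    run cd /etc/yum.repos.d
    # deactivate the old Red Hat repos by renaming them away from .repo
    run mv rhel-source.repo rhel-source.repo.old
    run mv rhel-debuginfo.repo rhel-debuginfo.repo.old
    run wget http://public-yum.oracle.com/public-yum-el5.repo
    # enable the base channel for update 7 (or edit the file by hand)
    run sed -i '/\[ol5_u7_base\]/,/^\[/ s/enabled=0/enabled=1/' public-yum-el5.repo
    run up2date -i yum-rhn-plugin
    run yum update
    run yum install kernel
    run yum install kernel-uek      # optional: the UEK kernel
    run yum install oracle-linux    # optional Oracle packages
}

conversion_plan
```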

First Impressions

So, what I failed to mention in my original post were my impressions of the migration.  We’ve been running Oracle Linux 5.7 for 3 weeks now.  Aside from the branding/logo changes (a penguin in Oracle armor instead of the Red Hat shadow man), I don’t see much difference at all.  Not that I’d really have any reason to expect to.

Since this OS upgrade coincided with a hardware upgrade, I feel it would be unfair to speculate on performance improvement; I suspect anything works better on the latest CPU and hardware architecture.  Suffice it to say, after a short period of fretting about the new OS, I just don’t monitor it anymore.  It works fine.

My concern about how much more manual the Oracle Linux experience is hasn’t changed much.  I now understand that I could create a local RPM mirror, updated daily through Oracle Enterprise Manager 11g, and thus through Enterprise Manager I could push or release updates to my Oracle Linux servers.  That’s all fine and good, but it’s just one more layer of complication I’d rather not deal with.  I may go that route eventually, but since I’m comparing apples to apples, I simply say: “I don’t have to do that with Red Hat”.

One impression I got throughout the whole conversion process was a general disjointedness from Oracle.  One of the main reasons I posted this blog entry is that the whole process, from information (sales) to install, is all over the place.  I mean literally all over the place.  The steps above are provided in a complete manner nowhere that I am aware of.  There’s a document here about how to wget the repos, a document there about how to deal with duplicate RHN UUIDs, another page to download the up2date RPMs, another page telling you how to register with the Unbreakable Linux Network.  I suppose I can’t expect Oracle to advertise that the UEK kernel isn’t certified for VMware, but it’s important, and I’d rather have learned that up front than on my own trying to boot a DEV VM into UEK.  Support told me to use up2date to update my packages, but when I put that in my original blog posting here, I was corrected in the comments and told about yum-rhn-plugin (noted in the steps above).  I looked back to see which Open World I was at when Oracle Linux was announced: it was fall of 2006.  I would think that after 5 years, this would be a bit more refined.

All that said, my final word on this is that while frustrating and non-intuitive, the migration to Oracle Linux has been fine.  It’s too early to say I recommend it, but I will say at this point, that I’m not regretting moving to it, and I think it’s worth a look.  Even if you’re from the old-school Sun/RedHat/AIX/HPUX environments like me.

Annoying Agent Problems

Filed under: 11g, Enterprise Manager — kkempf @ 9:30 am

New PROD RDBMS host

We run the main ERP database on a physical machine; I’d love to virtualize, and probably will one day soon, but we couldn’t get to vSphere 5 (required because of our CPU count) before the hardware refresh.  So we migrated Oracle to a brand new, spiffy Dell R610, and it’s smokin’ fast.  The process is what’s known as a physical-to-physical (P2P) server migration, and it went as well as could be expected.  There was a bit of LVM manipulation required at the OS level, but for the most part we managed to bumble our way through it.

In the process of migrating to the new physical machine (from a rather reliable but ancient IBM blade server, incidentally), I took the plunge and cut over our production database hosts from Red Hat 5.7 to Oracle Linux 5.7.  I say took the plunge, but in truth the risk was a known entity: it’s a Red Hat compatible kernel.  What sparked this decision was two miserable, unresponsive tickets with Red Hat support about high system CPU on my application server.  Not to be funny about it, but if I can pay about half as much to get support from Oracle that is just as bad, perhaps better, why wouldn’t I?  Incidentally, the process of migrating from RH5 to OL5 (formerly OEL5; now they just call it OL5) is something I will put in a detailed post shortly.

Angry Agents

After bringing up the database on new hardware, the agent would not communicate with the OMS:

The Oracle Management Server (OMS) has blocked this agent because it has either been reinstalled or restored from a filesystem backup.  Please click on the Agent Resynchronization button to resync the agent.

Your agent is hopelessly confused

When I “clicked on the agent resynchronization button to resync the agent”, it failed with an error.  You can bet your last dime, however, that the first thing Oracle Support asked in my ticket was “Did you try clicking the agent resynchronization button?”.  This is the subsequent message (see below as well):

Agent Operation completed with errors.  For those targets that could not be saved, please go to the target’s monitoring configuration page to save them.  All other targets have been saved successfully.  Agent has not been unblocked.

Error communicating with agent.  Exception message – oracle.sysman.emSDK.emd.comm.CommException: IOException in reading Response :: Connection reset

Your agent has double crossed you

Blocked Agents

If there’s one thing I hate, it’s blocked agents.  You bet I tried to unblock it, then resync it.  I tried command-line fixes: emctl status agent, emctl upload agent, emctl unsecure agent, emctl secure agent.  You name it.  Nada.

The Fix

I stumbled across Document ID 1307816.1 while my analyst was busy asking me things like “can you upload your log files”.  In the end, as the SYSMAN user, I ran this against my EM repository database:

exec mgmt_admin.cleanup_agent('problemhost.domainname.com:3872');
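A hedged sketch of wiring that call up from the shell: the `cleanup_sql` helper is my own name, and the host, port, and connect string are examples, not real ones.

```shell
# Build the cleanup statement for a given agent host and port.
# cleanup_sql is a hypothetical helper; the host/port below are examples.
cleanup_sql() {
    printf "exec mgmt_admin.cleanup_agent('%s:%s');\n" "$1" "$2"
}

cleanup_sql problemhost.domainname.com 3872
# pipe it into sqlplus as SYSMAN against the repository, e.g.:
# cleanup_sql problemhost.domainname.com 3872 | sqlplus sysman@emrep
```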

After that my agent was happy, could talk to the OMS, and life was good.
