Kevin Kempf's Blog

August 9, 2016

How to Migrate an Oracle database host in R12.2 using LVM

Filed under: Oracle, Oracle Linux — kkempf @ 9:40 am

Painting the backdrop

I realize this post is somewhat specific to my setup, but I believe it may hold value for some folks out there, so I thought I’d formalize it.  In my case, I have a rather large (1TB+) database residing on Oracle Linux (RHEL) 5 which serves as the back end for my EBS 12.2 environment.  We do not use ASM or RAC, but we do use Linux LVM (logical volume manager) to make growing disks easier.  The disk itself is on a SAN in the data center, so in my case this process involves some assistance from the systems/network folks.

I’m moving from OL5 to OL6 because support for OL5 is running out.  Let’s face it: the database works great on OL5, and I have no compelling reason to migrate it, but this is part of our lives: upgrades for the sake of upgrading.

For the purposes of this post, I’ll simply use oldhost as the OL5 hostname, and newhost as the OL6 hostname for the database, and appshost as the hostname for the applications tier.

In the olden days

Oracle used to have a bona fide methodology for migrating the database tier; for 11i, it’s spelled out in Doc ID 338003.1.  It made sense: you used the tech stack on the front end to tell the application tier that there’s a new host for the database.

The R12.2 solution

I opened an SR to get the equivalent document for 12.2, and the analyst basically said “Follow Doc ID 1968231.1 to use logical hostnames; you can sort of use autoconfig but we don’t support it”.  I don’t know if that’s really the best answer, but it was all I had to go with.  I had been planning to use DNS as a safety net, not as the primary vehicle for changing the database hostname.  But apparently it’s the only vehicle now, despite the fact that I enter the database hostname in the context file on the front end.  While I think this is a terrible answer, that’s not the point of my post, so I’ll let it go.

LVM Setup (oldhost)

In my environment, I set up my lvm volume groups like this:

Data1 - product datafiles for EBS
Data2 - more product datafiles for EBS
Archivelogs - archivelog destination
Redo - online redo
System - system/sysaux datafiles
RDBMS - Oracle database installation (binaries)
As you can imagine, the files contained in these groups are pretty large; well over a terabyte in total.  In an ideal situation, I’d take the database down cold and simply rsync the files to the new host in the same locations, then crank everything up.  But that would take hours I don’t have, so I went a different route.  It’s worth noting that on OL5, my disks on oldhost are ext3, while OL6 delivers ext4.  Since I’m moving the disks “as is”, I’m getting ext3 filesystems on newhost, the OL6 server.  It’s compatible, and something I just have to live with.
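Had I gone the rsync route, the copy itself would just be a recursive sync per mount point; a minimal sketch, assuming the mount points listed later in this post and ssh connectivity from oldhost to newhost (paths are illustrative):

# run on oldhost with the database shut down cold; repeat per mount point
rsync -avH --progress /u01/appdev/ newhost:/u01/appdev/
rsync -avH --progress /usr/local/oracle/archive/ newhost:/usr/local/oracle/archive/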
I feel like I should mention at this point that the very first step, really, was to build newhost as an OL6 environment with appropriate CPU and memory.

LVM Prep (oldhost)

  • The first step is to shut everything down, obviously, front and back end on oldhost and appshost.
  • Next, I unmount all the volumes pertaining to the environment (in this case, dev).  For my environment, each of these corresponds to an LVM volume group (the whole prep sequence is also scripted in the sketch after this list):
    • umount /u01/appdev
    • umount /u03/appdev
    • umount /u04/appdev
    • umount /u05/appdev
    • umount /u06/appdev
    • umount /usr/local/oracle/archive
  • Set all the volume groups to inactive
    • vgchange -an Data2
    • vgchange -an Data1
    • vgchange -an Archivelogs
    • vgchange -an Redo
    • vgchange -an System
    • vgchange -an RDBMS
  • Export the volume groups
    • vgexport Data2
    • vgexport Data1
    • vgexport Archivelogs
    • vgexport Redo
    • vgexport System
    • vgexport RDBMS
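Since the same operations repeat for every volume group, the whole prep lends itself to a small loop; a minimal sketch, assuming the volume group names above and that nothing else on oldhost uses them:

#!/bin/bash
# run as root on oldhost, after everything is unmounted
for vg in Data2 Data1 Archivelogs Redo System RDBMS ; do
  vgchange -an $vg   # deactivate the volume group
  vgexport $vg       # mark it exported so it can move to newhost
done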

Disk manipulation

At this point, the volumes can be safely manipulated by the disk admins.  What this entails will vary greatly based on your data center, and it’s outside the scope of Linux, so I’m not going to detail it here.  The gist of it is that your admin needs to remove the disks/virtual disks from the old host and present them to the new one by whatever means is appropriate.  I apologize in advance if I’m not saying this part correctly.

Adding the disks (newhost)

The first thing to try (assuming your new host received the disks “hot”) is simply

pvscan

They may just show up.  If not, you can try rescanning the scsi host:

ls /sys/class/scsi_host/ | while read host ; do echo "- - -" > /sys/class/scsi_host/$host/scan ; done

-or-

echo "- - -" > /sys/class/scsi_host/(host#, hit tab or guess)/scan

For example, this may turn into:

echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan

Then issue
fdisk -l
to confirm the disks are visible.  It’s worth noting you can watch /var/log/messages to see the system recognize the new disks as they’re added.  When all else fails, a reboot has never failed to get all disks recognized.
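To watch the discovery happen live, a simple tail works; this assumes the stock syslog location on OL6:

tail -f /var/log/messages   # watch for new sd* devices as the SAN presents them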
At this point, pvscan should show all your volume groups.

Final steps (newhost)

It’s kind of the opposite of the prep steps:

  • Import the volume groups
    • vgimport Data2
    • vgimport Data1
    • vgimport Archivelogs
    • vgimport Redo
    • vgimport System
    • vgimport RDBMS
  • Activate the volume groups
    • vgchange -ay Data2
    • vgchange -ay Data1
    • vgchange -ay Archivelogs
    • vgchange -ay Redo
    • vgchange -ay System
    • vgchange -ay RDBMS
  • Mount the disks (and put them in /etc/fstab so they survive a reboot; see the sample entries after this list!)
    • mount /dev/mapper/RDBMS-Dev /u01/appdev
    • mount /dev/mapper/System-Dev /u03/appdev
    • mount /dev/mapper/Redo-Dev /u04/appdev
    • mount /dev/mapper/Data1-Dev /u05/appdev
    • mount /dev/mapper/Data2-Dev /u06/appdev
    • mount /dev/mapper/Archivelogs-Logs /usr/local/oracle/archive
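So the mounts survive a reboot, the /etc/fstab entries just mirror the mount commands above; a minimal sketch, assuming the device mapper names shown (verify yours with ls /dev/mapper):

# /etc/fstab additions on newhost (ext3 filesystems moved from oldhost)
/dev/mapper/RDBMS-Dev         /u01/appdev                ext3  defaults  1 2
/dev/mapper/System-Dev        /u03/appdev                ext3  defaults  1 2
/dev/mapper/Redo-Dev          /u04/appdev                ext3  defaults  1 2
/dev/mapper/Data1-Dev         /u05/appdev                ext3  defaults  1 2
/dev/mapper/Data2-Dev         /u06/appdev                ext3  defaults  1 2
/dev/mapper/Archivelogs-Logs  /usr/local/oracle/archive  ext3  defaults  1 2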

Final Steps

In my case, I had to do four final steps to comply with Oracle’s document and make everything work (a sketch of the first two follows the list):

  • change /etc/hosts on appshost to explicitly refer to newhost as oldhost… just in case
  • change listener.ora on newhost ($TNS_ADMIN) so the listener binds to the new hostname
  • add a DNS record so calls to oldhost resolve to newhost
  • start up the database (newhost)
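A minimal sketch of the first two items, with an illustrative IP address, domain, and listener name (substitute your own):

# /etc/hosts on appshost: point oldhost's name at newhost's IP, just in case
10.0.0.50   oldhost.example.com   oldhost

# $TNS_ADMIN/listener.ora on newhost: HOST now carries the new hostname
DEV =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = newhost.example.com)(PORT = 1521))
  )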

Parting Shots

This method took me about 30 minutes.  If the database were smaller, I’d simply rsync the ext3 filesystems to new disks built as ext4 on newhost.

May 25, 2016

When ADOP breaks because of the magic of checkfile

Filed under: Online Patching, Oracle, R12.2 — kkempf @ 3:57 pm

As usual, it’s been some time since I’ve thrown anything out here.  What’s been keeping me busy has been largely to do with ADOP.  I have a real love/hate relationship with this new addition to the EBS family, and lately it’s been more hate.

Oracle defines nocheckfile as an unsupported option which isn’t to be used unless explicitly called for in the readme of a patch.  Intriguing; wonder what that’s all about, and why would it be there at all?  Let’s start from the beginning, defining default adop behavior and what checkfile does.  In theory, during the adop apply phase, the default option checkfile says “go see if I already know I have a more advanced or equal version of this code before I apply it”.  If adop thinks it has already ingested this particular bit of code, it skips it, in an effort to save time during the patch.  A couple of key things to note here.  First, it’s trying to save me time.  Second, is it ever wrong about whether it’s already ingested an equal or more advanced version of the code?

First, let’s talk about saving time.  That’s a good thing.  If I only have to ingest 30% of a large patch, even using adop with the EBS available, that’s a win.  Now the second part.  It’s only good if it’s accurate.  100% accurate.  Because if it skips code you actually need, you wind up with a big, stinking mess to sort through.  There’s a little foreshadowing there.

Take this scenario, which you’ve probably guessed by now happened to me.  Some bug, surfaced by some aborted patch cycles, made my adop cycle think it could safely skip pieces of a patch which it believed were already applied.  Only they weren’t; that patch cycle was aborted.

Here’s your first hint that you may be experiencing this: you get a warning during the apply phase that the patch already exists in your system, but you know that not to be the case.  At this point, Oracle thinks the patch is in, but you know it’s not.  So you innocently go on and use forceapply, per the adop “HINT”.  Now you also think the patch is in your system.  But it’s not.  Because of the magic of checkfile, and the fact that the patch was previously aborted and adop thinks it’s in there, you’re in a world of hurt now.

It’s a really insidious state to find yourself in, and the fix is to reapply the patches using both nocheckfile and forceapply.  This basically says “apply the patch despite the fact that you already think it’s installed, and force every bit of code into the system regardless of whether you think you already have it”.  Something like the sketch below.
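A minimal sketch of the reapply, using a made-up patch number; check your patch readme and AD level for the exact options syntax:

adop phase=apply patches=12345678 options=nocheckfile,forceapply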

I can tell you this: never run an adop abort without a full cleanup, as well as an fs_clone to reset your patch environment; the exact commands are below.  That should keep you in good shape.  Officially, Oracle now spells this out in the Oracle E-Business Suite Maintenance Guide, Release 12.2 (Part Number E22954-20).  I strongly recommend reading that from cover to cover, because early on in 12.2, this wasn’t how adop abort was explained even by Oracle University.
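For reference, the full reset after an abort looks like this (the same sequence appears in my March post below):

adop phase=abort,cleanup cleanup_mode=full
adop phase=fs_clone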

March 10, 2016

My EBS R12.2 adop patch failed, now what?

Filed under: Oracle, R12.2 — kkempf @ 3:55 pm

adop fails a lot, especially when you’re building a regression test environment.  Remember the old DOS days, when you yanked out the floppy disk but still had A: on the screen, and DOS said (a)bort, (r)etry, (f)ail?  And abort and fail always seemed like the same thing, but somehow they weren’t?  Same thing with adop, only it calls them abandon and restart.

adop defaults to abandon=no restart=yes if not specified, but to me, that’s still a bit unclear.  For example, what the heck would abandon=yes restart=no do?  Here’s what I came up with:

(1) abandon and cleanup (I strongly urge you to do a full cleanup, or you may be asking for trouble)

adop phase=abort,cleanup cleanup_mode=full
adop phase=fs_clone

(2) fix the problem via whatever means (adctrl in the patch environment, drop the index the patch is trying to re-create, etc) and retry from where it failed

adop phase=apply patches=17020683 restart=yes [abandon=no is implied and not required]

(3) fix what’s wrong but retry from the beginning of the patch, or try a new patch (why you would do the latter, I have no idea)

adop phase=apply patches=17020683 abandon=yes [restart=yes is implied and not required]   (same patch)
adop phase=apply patches=17893964,18497540 abandon=yes          (new patch)

 

December 9, 2015

FND_LOBS and Securefiles (Part II)

Filed under: 11g, 12c, R12, R12.2 — kkempf @ 3:03 pm

An Actionable Plan

In my last post, I talked about how I was playing with securefiles, fnd_lobs, deduplication and advanced compression.  It took a bit of tweaking, but I think I can safely share my success at reining in FND_LOBS.  I’m going to outline the process of an online conversion of FND_LOBS from basicfile storage (the default) to securefiles.

Jumping off a Bridge

Ever see Oracle do a presentation and the first slide is legalese called a Safe Harbor which basically says everything you are about to see and hear is subject to change and no business plans should be made on assumptions or potential future releases?  I feel like I should make a similar warning, let’s call it a bridge jumper.  Remember the old saying, “If all your friends jumped off a bridge, would you do it too?”.  Here’s what I’ll say: This works in DEV on EBS version R12.2, and I’ve done some minimal functional testing.  Don’t do it in PROD until you prove that it doesn’t impact something I forgot about or can’t foresee.  I recommend a regression pass.  Also don’t use products you’re not licensed for, such as advanced compression, even in non-production environments.

References

http://docs.oracle.com/cd/B28359_01/appdev.111/b28393/adlob_smart.htm

http://www.oracle.com/technetwork/articles/sql/11g-securefiles-084075.html

Just the Highlights

You can convert your FND_LOBS table from basicfiles to securefiles via online redefinition.  In doing so, you define the new lob table with securefile storage, and if you use the deduplication and compression clauses, you will probably save a lot of space.  I did, dropping from 150GB to about 50GB.

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
SQL> /
OWNER     TABLE_NAME   COLUMN_NAME   SEGMENT_NAME                B.BYTES/1024/1024
--------  -----------  ------------  --------------------------  -----------------
APPLSYS   FND_LOBS     FILE_DATA     SYS_LOB0000034032C00004$$           149115.25

By the Numbers

I really wish WordPress would let me paste code in better, as I could make this a lot easier on the eyes.  Well you get what you get.

1. Increase the size of your APPS_TS_MEDIA tablespace until it can hold a second copy of your FND_LOBS lob segment (or at least the compressed percentage you expect to end up with).  See my prior post to figure out how to determine the current size.  A sketch of adding space follows.
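A minimal sketch of adding that space, with a hypothetical datafile path and sizes; repeat until you have the headroom you need:

sqlplus / as sysdba <<'EOF'
-- illustrative path and sizes; adjust for your environment
alter tablespace apps_ts_media
  add datafile '/u05/appdev/devdata/media99.dbf'
  size 8g autoextend on next 1g maxsize 32g;
EOF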

2. Create FND_LOBS_NEW (yeah, creative name, right?) using the same syntax as your old table.  Note the deduplicate and compress high clauses.

create
  table applsys.fnd_lobs_new
  (
    file_id          number
  ,file_name         varchar2( 256)
  ,file_content_type varchar2( 256) not null
  ,file_data blob
  ,upload_date     date
  ,expiration_date date
  ,program_name    varchar2( 32)
  ,program_tag     varchar2( 32)
  ,language        varchar2( 4) default( userenv( 'LANG') )
  ,oracle_charset  varchar2( 30) default( substr( userenv( 'LANGUAGE'),instr( userenv( 'LANGUAGE'),'.') + 1) )
  ,file_format     varchar2( 10) not null
  ,primary key( file_id ) validate
  )
  tablespace apps_ts_media pctfree 10 initrans 10 maxtrans 255 storage
  (
    initial 128k buffer_pool default
  )
  logging nocompress lob
  (
    file_data
  )
  store as securefile
  (
    deduplicate compress high tablespace apps_ts_media chunk 8192 storage( initial 128k buffer_pool default) pctversion 0 nocache logging
  ) ;

3. Take your new table out of logging and enable parallel dml in an effort to save time:

  • alter table applsys.fnd_lobs_new nologging;
  • alter session force parallel dml parallel 4;
  • set timing on

4. Map the columns and start the redefinition; this is the step that copies the data (it ran about 12 hours for me)

declare
  l_col_mapping varchar2(1000);
begin
  l_col_mapping :=
     'file_id file_id, '
   ||'file_name file_name, '
   ||'file_content_type file_content_type, '
   ||'file_data file_data, '
   ||'upload_date upload_date, '
   ||'expiration_date expiration_date, '
   ||'program_name program_name, '
   ||'program_tag program_tag, '
   ||'language language, '
   ||'oracle_charset oracle_charset, '
   ||'file_format file_format'
   ;
   dbms_redefinition.start_redef_table ('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW', l_col_mapping)
   ;
end;
/

5. Drop the existing intermedia index APPLSYS.FND_LOBS_CTX.  At this point, you may experience adverse performance on FND_LOBS for some users, but this index is on the contents of the LOBs as I understand it, so I doubt anyone will notice (your primary key remains available).  If you don’t do this, the next step fails spectacularly.

drop index applsys.fnd_lobs_ctx;

6. Copy the table dependents (indexes, triggers, constraints, grants).  This ran really fast for me.

declare
   l_error_count pls_integer := 0;
begin
   dbms_redefinition.copy_table_dependents('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW', 1, TRUE, TRUE, TRUE, FALSE, l_error_count)
   ;
   dbms_output.put_line('Errors Occurred := '|| to_char(l_error_count))
   ;
end;
/

7. Finish the redefinition process.  Also very fast.

begin 
    dbms_redefinition.finish_redef_table
       ('APPLSYS', 'FND_LOBS', 'FND_LOBS_NEW');
end;
/

8. Confirm it worked

select securefile
from dba_lobs
where table_name = 'FND_LOBS'
/
SEC
---
YES 

9. Space check

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
OWNER     TABLE_NAME   COLUMN_NAME   SEGMENT_NAME                B.BYTES/1024/1024
--------  -----------  ------------  --------------------------  -----------------
APPLSYS   FND_LOBS     FILE_DATA     SYS_LOB0008283556C00004$$               46987

Elapsed: 00:00:11.64

10. Drop the temp table

drop table applsys.fnd_lobs_new;

11. Recreate the index APPLSYS.FND_LOBS_CTX from the EBS script.  This ran about 5 hours for me.

cd $FND_TOP/sql
sqlplus apps
SQL> @aflobbld.sql APPLSYS APPS

12. Gather statistics

EXEC FND_STATS.GATHER_TABLE_STATS(OWNNAME => 'APPLSYS', TABNAME => 'FND_LOBS');

Parting Shot

It occurred to me during cleanup that if you create your “temporary” FND_LOBS table (in my case, FND_LOBS_NEW) in a new tablespace instead of apps_ts_media, then once you’re done you could probably reclaim all the disk allocated for this one-time project. The alternative is that your apps_ts_media is now 3x (or more) what you need: you eliminated your large, basicfiles FND_LOBS table, and now have a compressed and deduplicated version on securefiles which will presumably take a long, long time to fill up the apps_ts_media tablespace you expanded for the project. Because of high water marks, I was unable to shrink my tablespace in any significant way after I was done.

That’s It Folks

I tested this a few ways from the functional standpoint.  I ran a report which printed a LOB (Invoice) which was part of a requisition.  I opened an attachment in Quality.  I saved an attachment in Quality.  Everything checked out.  On to a full regression cycle!  Thanks for tuning in, let me know if you find this useful.

December 4, 2015

FND_LOBS and Securefiles (Part I)

Filed under: 11g, R12.2 — kkempf @ 11:05 am

FND_LOBS, how I hate thee

In our EBS environment, users have been filling up FND_LOBS with attachments for the past 10 years.  From a functional standpoint, this is mostly quotes (.pdfs) attached to purchase orders, and M$ Office documents attached to customer orders.  It grows without bounds, and in most cases there’s no expiration date in the table, nor any means to purge it with any reassurance that I’m not breaking something.  During a database 12c class last summer in Chicago, I learned some of the benefits of securefiles, and it seemed like something to look into.  Then I was at a conference in Raleigh in September and it came up again.  Seeing a trend, I decided to start playing around with it (finally, yes, I know this is not new technology!).

Securefiles

Securefiles are like LOBs 2.0 in Oracle.  They’re Oracle’s attempt to better manage unstructured data, and they’re a core database feature, having nothing directly to do with EBS.  The best document I found on it was here.  Here’s what caught my attention: securefiles can leverage deduplication and advanced compression.  I didn’t see any easy way to determine what percentage of my FND_LOBS content was duplicate, but let’s be honest: users don’t care if they upload the same thing 10 times, it’s not their problem.

Disclaimer

As usual, check with your sales rep before you do this stuff in production.  Some of what I’m doing requires a license for Advanced Compression on your core database, and that’s not cheap.

Sizing up the Problem

A quick query shows that the fnd_lobs table (really, its associated lob segment) is consuming 150 GB.  That may not seem staggering, until you multiply it by all EBS instances, add Dataguard, and account for increased backup times and sizes; then it gets my attention.  Also take into account that this one object accounts for ~15% of my database size.  It’s been on my radar for a long time.

select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
from dba_lobs a, dba_segments b
where a.segment_name = b.segment_name
and a.owner = b.owner
and a.table_name like 'FND_LOBS%'
order by b.bytes/1024/1024 desc
/

APPLSYS   FND_LOBS            FILE_DATA      SYS_LOB0000034032C00004$$   149115.25
APPLSYS   FND_LOBS_DOCUMENT   BLOB_CONTENT   SYS_LOB0000034093C00007$$          25
APPLSYS   FND_LOBS_DOCUMENT   CONTENT        SYS_LOB0000034093C00008$$        .125

A Quick and Dirty Test Case

I wanted to see what my space savings would be if I converted this table to securefiles.  Thus I added a little space to apps_ts_media (well, a lot!) and created a table called fnd_lobs_new to look just like fnd_lobs, except that it uses securefiles as its lob storage choice, with deduplication and high compression.

create
  table applsys.fnd_lobs_new
 (file_id          number
 ,file_name         varchar2( 256)
 ,file_content_type varchar2( 256) not null
 ,file_data blob
 ,upload_date     date
 ,expiration_date date
 ,program_name    varchar2( 32)
 ,program_tag     varchar2( 32)
 ,language        varchar2( 4) default( userenv( 'LANG') )
 ,oracle_charset  varchar2( 30) default( substr( userenv( 'LANGUAGE'),instr( userenv( 'LANGUAGE'),'.') + 1) )
 ,file_format     varchar2( 10) not null
 ,primary key( file_id ) validate
 )
 tablespace apps_ts_media pctfree 10 initrans 10 maxtrans 255 storage
 (
 initial 128k buffer_pool default
 )
 logging nocompress lob
 (
 file_data
 )
 store as securefile
 (
 deduplicate compress high tablespace apps_ts_media chunk 8192 storage( initial 128k buffer_pool default) pctversion 0 nocache logging
 ) 
 ;
alter table applsys.fnd_lobs_new nologging;
insert into applsys.fnd_lobs_new select * from applsys.fnd_lobs;  -- this takes a LONG time; as in 12+ hours for me
select a.owner, a.table_name, a.column_name, a.segment_name , b.bytes/1024/1024
    from dba_lobs a, dba_segments b
    where a.segment_name = b.segment_name
    and a.owner = b.owner
    and a.table_name like 'FND_LOBS%'
   order by b.bytes/1024/1024 desc
SQL> /
OWNER     TABLE_NAME          COLUMN_NAME    SEGMENT_NAME                B.BYTES/1024/1024
--------  ------------------  -------------  --------------------------  -----------------
APPLSYS   FND_LOBS            FILE_DATA      SYS_LOB0000034032C00004$$           149115.25
APPLSYS   FND_LOBS_NEW        FILE_DATA      SYS_LOB0008283485C00004$$           46989.875
APPLSYS   FND_LOBS_DOCUMENT   BLOB_CONTENT   SYS_LOB0000034093C00007$$                  25
APPLSYS   FND_LOBS_DOCUMENT   CONTENT        SYS_LOB0000034093C00008$$                .125

Bottom Line

I reduced fnd_lobs from 150GB to 47GB.  That’s a win!

Next Steps

My test isn’t practical.  I simply wanted to see if there was enough gain to continue with an online conversion.  There is.  Next post I hope to show you exactly how to convert FND_LOBS to securefiles without any downtime.

October 29, 2015

Fun with adop

Filed under: 11g, R12.2 — kkempf @ 1:58 pm

Sphincter said what?

I was doing some core database patching flagged by ETCC against a 12.2 DEV instance today.  I was hoping to get to AD/TXK Delta 7 and see how that worked.  But after a big opatch session, when I restarted services, adop phase=prepare threw this error:

Validating system setup…
[ERROR]     Failed to execute SQL statement :
select AD_ZD_ADOP.GET_INVALID_NODES() from dual
[ERROR]     Error Message :
[ERROR]     Failed to execute SQL statement :
select AD_ZD_ADOP.GET_INVALID_NODES() from dual
[ERROR]     Error Message :
[UNEXPECTED]Nodes "-1" are listed in ADOP_VALID_NODES table but not in FND_NODES table.
[UNEXPECTED]To correct this, run AutoConfig on nodes "-1"
[UNEXPECTED]Error while checking if this is a multi node instance

MOS had a direct hit on this (Doc ID 1988581.1): “Nodes "-1" are listed in ADOP_VALID_NODES table but not in FND_NODES table”, but it didn’t fix the issue.  In fact, according to that note, I was fine.

The Fix

On the database tier, go to $ORACLE_HOME/appsutil/admin, re-run adgrants, then recompile the package body:

sqlplus / as sysdba

@adgrants apps

alter package apps.AD_ZD_ADOP compile body;

After this, adop phase=prepare ran happily enough, and included an explicit autoconfig run.  I can’t promise this will work for you, but since the error message is rather vague and unhelpful, I thought I’d post it.  A quick way to confirm the fix took is below.
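Since the error message gives you so little to go on, it’s worth confirming afterward that the package body actually went valid; a quick check:

sqlplus / as sysdba <<'EOF'
-- both the package spec and body should show VALID
select object_name, object_type, status
from   dba_objects
where  owner = 'APPS'
and    object_name = 'AD_ZD_ADOP';
EOF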

October 28, 2015

RDBMS 11g vs. 12c

Filed under: 11g, 12c — kkempf @ 10:12 am

A bit of history

Last June, I wrote up a short summary of the support implications for Database 12c.  Given that it’s Open World week, I thought I’d see if there were any relevant announcements on this front.  I went back to Document 742060.1 and was delighted to see a change since I’d last looked.  In June of 2015 it read:

NOTE: The fee for the first year of Extended Support for 11.2 has been waived for all customers. The Premier Support timeline has been extended one year to show that customers do not need to pay the Extended Support fee to receive Error Correction Support for that first year.

Now, it reads:

“NOTE: The fee for Extended Support for 11.2 has been waived for all customers through 31 May 2017. No customer action is required to take advantage of Extended Support during this time.” and “Extended Support fees waived until May 31, 2017. An ES service contract is required starting 1-Jun-2017.”

(screenshot: Oracle Database support roadmap, October 2015)

There’s no change log for the doc so I can’t tell when Oracle made this decision. Some of this appears to be semantics; instead of granting more “Premier support extensions” they’re delivering “Free Extended Support”.  While there is a subtle difference there, the bottom line is that if you’re on 11.2.0.4, you have a little breathing room to get to 12c!

Why does this matter?

For EBS customers like me, Release 12.2 wasn’t even certified with RDBMS 12.1.0.2 and Dataguard until August 2015.  This caused much angst for me, as I had two options: pay more to stay on 11.2 extended support, or rush into 12.1.0.2 and hope for the best in the 4 months Oracle gave me between the certification in August of 2015 and the January 2016 “end of free first year of extended support”.

Thanks, Oracle, for doing the right thing and extending the window for 11.2!

September 3, 2015

More fun with 12c: Online datafile moves

Filed under: 12c — kkempf @ 1:50 pm

A long time coming

I think I’ve been wondering since 9i why I couldn’t do this:

SQL> select name, enabled from v$datafile where name like '%vertex%';
NAME                                        ENABLED
--------------------------------------------------------------------------- ----------
/u05/appdev/devdata/vertex01.dbf                        READ WRITE


SQL> alter database move datafile '/u05/appdev/devdata/vertex01.dbf' to '/u06/appdev/devdata/vertex01.dbf';

Database altered.

SQL> select name, enabled from v$datafile where name like '%vertex%';
NAME                                        ENABLED
--------------------------------------------------------------------------- ----------
/u06/appdev/devdata/vertex01.dbf                        READ WRITE


SQL>

Oracle RDBMS 12c table restore

Filed under: 12c, RMAN — kkempf @ 7:37 am

Database 12.1.0.2

So I’m playing with 12c in a development environment with EBS 12.2.3, trying to understand some of the new features.  I don’t use an FRA, because as an EBS DBA there’s not much logic in flashing only one table back, or (worse) the whole ERP.  That said, this new RMAN feature is pretty cool, and best of all, it requires no effort on my part.

Custom Table

Nothing special here, just a table in a custom schema (yes, I removed the schema name references from this post to “anonymize” it a bit).  I get a row count and the current SCN, then truncate the table.

SQL*Plus: Release 12.1.0.2.0 Production on Wed Sep 2 16:35:16 2015
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Enter password:
 Last Successful login time: Wed Sep 02 2015 16:34:56 -04:00
Connected to:
 Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select count(*) from label_data;
COUNT(*)
 ----------
 232
SQL> column current_scn format 999999999999999
 SQL> select sysdate, current_scn from v$database;
SYSDATE        CURRENT_SCN
 --------- ----------------
 02-SEP-15    5982678044431
SQL> truncate table label_data;
Table truncated.

RMAN Restore/Recover of a Table

Now let's fix it with the new RMAN 12c feature, doing the work in /u05/appdev/restore and renaming the table to restored_label_data.  
The long and short of it is that RMAN does all the work for you.  It brings up a minimal instance, restores the table, then exports it 
into the restored table you specified, then wipes out the minimal instance.  It took awhile, about 30 minutes, but the main database 
was available the whole time.  I've removed some messages to save space, but you get the gist of it.  I suppose in some abstract and 
sampling manner, this also serves to test your RMAN backup to a degree.

RMAN> recover table label_data until scn 5982678044431 auxiliary destination '/u05/appdev/restore' remap table label_data:restored_label_data;
Starting recover at 02-SEP-15
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=425 device type=DISK
 RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time
List of tablespaces expected to have UNDO segments
 Tablespace SYSTEM
 Tablespace APPS_UNDOTS1
Creating automatic instance, with SID='degz'
initialization parameters used for automatic instance:
 db_name=DEV
 db_unique_name=degz_pitr_DEV
 compatible=12.1.0
 db_block_size=8192
 db_files=1000
 diagnostic_dest=/u01/appdev/oracle/devdb/12.1.0.2/log
 _system_trig_enabled=FALSE
 sga_target=2560M
 processes=200
 db_create_file_dest=/u05/appdev/restore
 log_archive_dest_1='location=/u05/appdev/restore' #No auxiliary parameter file used
starting up automatic instance DEV
Oracle instance started
Total System Global Area    2684354560 bytes
Fixed Size                     3714440 bytes
 Variable Size                654312056 bytes
 Database Buffers            2013265920 bytes
 Redo Buffers                  13062144 bytes
 Automatic instance created
contents of Memory Script:
 {
 # set requested point in time
 set until  scn 5982678044431;
 # restore the controlfile
 restore clone controlfile;
# mount the controlfile
 sql clone 'alter database mount clone database';
# archive current online log
 sql 'alter system archive log current';
 }
 executing Memory Script
executing command: SET until clause
Starting restore at 02-SEP-15
 allocated channel: ORA_AUX_DISK_1
 channel ORA_AUX_DISK_1: SID=2 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
 channel ORA_AUX_DISK_1: restoring control file
 channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/c-3959101112-20150902-01
 channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/c-3959101112-20150902-01 tag=TAG20150902T012532
 channel ORA_AUX_DISK_1: restored backup piece 1
 channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
 output file name=/u05/appdev/restore/DEV/controlfile/o1_mf_bygqo9w1_.ctl Finished restore at 02-SEP-15
sql statement: alter database mount clone database
sql statement: alter system archive log current
contents of Memory Script:
 {
 # set requested point in time
 set until  scn 5982678044431;
 # set destinations for recovery set and auxiliary set datafiles
 set newname for clone datafile  1 to new;
 set newname for clone datafile  2 to new;
 set newname for clone datafile  3 to new;
 set newname for clone datafile  4 to new;
 set newname for clone datafile  5 to new;
 set newname for clone datafile  295 to new;
 set newname for clone datafile  314 to new;
 set newname for clone datafile  8 to new;
 set newname for clone datafile  11 to new;
 set newname for clone datafile  288 to new;
 set newname for clone datafile  392 to new;
 set newname for clone datafile  20 to new;
 set newname for clone datafile  106 to new;
 set newname for clone tempfile  1 to new;
 set newname for clone tempfile  2 to new;
 set newname for clone tempfile  3 to new;
 set newname for clone tempfile  5 to new;
 # switch all tempfiles
 switch clone tempfile all;
 # restore the tablespaces in the recovery set and the auxiliary set
 restore clone datafile  1, 2, 3, 4, 5, 295, 314, 8, 11, 288, 392, 20, 106;
switch clone datafile all;
 }
 executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file 
renamed tempfile 2 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file 
renamed tempfile 3 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file 
renamed tempfile 5 to /u05/appdev/restore/o1_mf_temp_%u_.tmp in control file
Starting restore at 02-SEP-15
 using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
...
contents of Memory Script:
{
# set requested point in time
set until  scn 5982678044431;
# online the datafiles restored or switched
sql clone "alter database datafile  354 online";
# recover and open resetlogs
recover clone database tablespace  "DATA", "SYSTEM", "APPS_UNDOTS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  354 online

Starting recover at 02-SEP-15
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00354: /u05/appdev/restore/TQFR_PITR_DEV/datafile/o1_mf_data_bygtnmkg_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/6sqfujuo_1_1
channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/6sqfujuo_1_1 tag=BACKUP_DEV_INCR_083115010001
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00354: /u05/appdev/restore/TQFR_PITR_DEV/datafile/o1_mf_data_bygtnmkg_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/93qg197k_1_1
channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/93qg197k_1_1 tag=BACKUP_DEV_INCR_090115010003
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00354: /u05/appdev/restore/TQFR_PITR_DEV/datafile/o1_mf_data_bygtnmkg_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /usr/local/oracle/backup/RMAN/DEV/baqg3so8_1_1
channel ORA_AUX_DISK_1: piece handle=/usr/local/oracle/backup/RMAN/DEV/baqg3so8_1_1 tag=BACKUP_DEV_INCR_090215010002
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

archived log for thread 1 with sequence 4156 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4156.log
archived log for thread 1 with sequence 4157 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4157.log
archived log for thread 1 with sequence 4158 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4158.log
archived log for thread 1 with sequence 4159 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4159.log
archived log for thread 1 with sequence 4160 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4160.log
archived log for thread 1 with sequence 4161 is already on disk as file /usr/local/oracle/archive/DEV1880762303__4161.log
archived log file name=/usr/local/oracle/archive/DEV1880762303__4156.log thread=1 sequence=4156
archived log file name=/usr/local/oracle/archive/DEV1880762303__4157.log thread=1 sequence=4157
archived log file name=/usr/local/oracle/archive/DEV1880762303__4158.log thread=1 sequence=4158
archived log file name=/usr/local/oracle/archive/DEV1880762303__4159.log thread=1 sequence=4159
archived log file name=/usr/local/oracle/archive/DEV1880762303__4160.log thread=1 sequence=4160
archived log file name=/usr/local/oracle/archive/DEV1880762303__4161.log thread=1 sequence=4161
media recovery complete, elapsed time: 00:00:01
Finished recover at 02-SEP-15
...
ORA-06512: at "SYS.DBMS_METADATA", line 10261
   EXPDP> . . exported "LABEL_DATA"                    207.4 KB     232 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_tqFr_jxbe" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_tqFr_jxbe is:
   EXPDP>   /u05/appdev/restore/tspitr_tqFr_90139.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_tqFr_jxbe" completed with 10 error(s) at Wed Sep 2 17:35:20 2015 elapsed 0 00:01:31
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_tqFr_Bipd" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_tqFr_Bipd":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "RESTORED_LABEL_DATA"           207.4 KB     232 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_tqFr_Bipd" successfully completed at Wed Sep 2 17:36:40 2015 elapsed 0 00:00:48
Import completed

sqlplus apps@dev
Enter password: 
Last Successful login time: Thu Sep 03 2015 08:15:09 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select count(*) from RESTORED_LABEL_DATA;

  COUNT(*)
----------
       232

SQL>



August 26, 2015

Fixing Enterprise Manager 12c Timeouts

Filed under: Enterprise Manager — kkempf @ 11:08 am


Old Habits Die Hard

I have a monitor devoted almost exclusively to the performance page on my old EM 11g, still monitoring my main E-Business Suite production instance.  I can see at a glance if the system is busy, has blocking locks, massive I/O, etc.  It just feels comfortable and gives me a quick status at a glance.

Enter Enterprise Manager 12c.  I installed 12.1.0.5 and have it monitoring everything but my most important environments at this point.  It took a bit of getting used to, but eventually I got the hang of it and worked out the handful of issues I had with it.  The biggest problem I had from a usability standpoint was that it keeps timing out on the performance home page I’m trying to keep running on my 3rd monitor.  There’s even a checkbox which says “Disable Timeout on Performance Home Page”, but apparently it doesn’t do anything.  And yes, I’m aware of the security issues and why it’s in place; in my case this is not a concern (physical security > application security).

If you scour My Oracle Support, you may come across Doc ID 1644004.1, which tells you how to increase the timeout.  What it doesn’t tell you is that if you set the value of oracle.adf.view.rich.poll.TIMEOUT to -1, you disable the timeout altogether.

Not Quite Done

Nothing is ever simple.  Apparently after making this change, EM defaults to a 45-minute value for oracle.sysman.eml.maxInactiveTime, which results in this screen:

(screenshot: the EM maximum inactive time logout screen)

So you need to do this also:

emctl set property -name oracle.sysman.eml.maxInactiveTime -value -1 -sysman_pwd [your sysman password]

emctl stop oms

emctl start oms
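You can sanity-check the property afterward; I believe emctl offers a matching get property in EM 12c, but verify against your version:

emctl get property -name oracle.sysman.eml.maxInactiveTime -sysman_pwd [your sysman password]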
