Kevin Kempf's Blog

July 29, 2009

Automatic Memory Management (AMM) … now available in (very) select markets

Filed under: 11g, Oracle — kkempf @ 9:50 am

When I look at memory advisors against an 11g database in EM 10.2.0.5, I always see a message saying I'm not using Automatic Memory Management (AMM). I do have Automatic Shared Memory Management (ASMM) enabled, and it works fine as far as I can tell. I was curious about the difference (AMM is a new 11g feature) so I did a little digging. Effectively, AMM is just ASMM plus PGA management. So you set pga_aggregate_target=0, sga_target=0, and a new parameter called memory_target=24G, or whatever you want the maximum Oracle footprint to be for the given instance.
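If you want to try it on a sandbox, the switch is just a few init parameters. A minimal sketch (the 24G value is only this example's footprint; since memory_max_target is static, the changes go to the spfile and take effect after a bounce):

-- sketch: enable AMM in place of ASMM (illustrative values)
alter system set memory_max_target=24G scope=spfile;
alter system set memory_target=24G scope=spfile;
alter system set sga_target=0 scope=spfile;
alter system set pga_aggregate_target=0 scope=spfile;
-- then shutdown immediate / startup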

As a brief aside, there is also a memory_max_target, which is akin to sga_max_size; it defines the upper boundary of memory the instance is allowed. If not specifically defined, it defaults to the memory_target value. Just as with sga_max_size, I don't get it (enlighten me!). Why would you ever allocate the memory up front with memory_max_target, or sga_max_size, and not use all of it (meaning sga_target is less than sga_max_size, or memory_target is less than memory_max_target)? It sounds great; room to grow up to your max if required… but with all of the memory management taking place, I fail to understand why you would reserve memory you're not using. Regardless, I digress; that's not what this post is about.

Fiddling with a non-production system, I changed the init parameters and started the instance:

ORA-00845: MEMORY_TARGET not supported on this system

What? A quick search on Metalink (doc id 749851.1) told me “The use of AMM is absolutely incompatible with HugePages.”

Alright, so in our case (and probably the case of anyone running 11i with a large SGA) we use HugePages. The default "small page" on our Intel x86 chips is 4KB, and Red Hat 5 makes huge pages 2MB by default. We haven't tried running a "large" SGA on "small pages", as AMM would require, because traditionally the CPU overhead of mapping 4KB pages vs. 2MB pages was noticeable (512x as many pages to track, by my math: for a 16GB SGA, that's 8,192 huge pages versus 4,194,304 "small pages"). If I have to choose between HugePages and AMM, HugePages feels like the priority.

My only complaint here is that I don’t think we’re doing anything that unusual. Red Hat Linux 5 64-bit OS with 64-bit Oracle, high concurrency, and a large(ish?) SGA of 24GB. Seems like that has to be one of the more common platforms Oracle is installed on these days. Conversely, who can benefit from AMM? Data warehouses? Tiny databases running on Windows?


July 28, 2009

Blackberries & white rice

Filed under: Uncategorized — kkempf @ 1:44 pm

I have a Blackberry 8830 from work. All in all, it serves its purpose well, and, although I generally despise talking on the cell phone, it goes with the job. Over the 4th of July weekend, I had an unexpected capsize (does one ever expect one?) on Lake Murray, and after some time in the water realized the device was in my pocket. All told, it was probably submersed for a minute or less, and this is freshwater. Saltwater submersion, from what I’ve read, will eventually corrode the innards of this device, whereas with freshwater, you at least have a chance.

I googled around to get opinions on how to salvage the device, and confirmed a suggestion from the host of the party I was at prior to my aquatic incident. I removed the battery and sealed the phone in a ziplock container filled with white rice. The idea is that the rice will absorb the moisture inside the phone, so I left it in there for a day and a half. When I took it out… it worked again, for the most part: the LED in the top right is stuck "on" in red, and the speaker is quieter than ever. Not saying this will work in every situation, just vouching that it can work…

July 27, 2009

Pulling the plug on 11i Advanced Data Compression

Filed under: 11g, advanced compression, Bugs, Dataguard — kkempf @ 10:49 am

This past weekend I uncompressed all my tables, rebuilt the associated indexes, and said adieu to Advanced Compression. Based on my (still open) Dataguard block corruption SR, the admin at work likes to call it Advanced Data Corruption. Regardless, I might have lived with the first issue as a beta tester, but after a second ORA-00600 related to the product cropped up last week, I gave up. When support admits they have no internal documentation of the error, and begins suggesting the most inane courses of action you can come up with, it's time to give it the boot. It is my opinion that while this product works in the simplest environments, on the whole it is half-baked, not well tested, and not worth being a "guinea pig" or "production beta tester" for.

Incidentally, the syntax to uncompress (and move, recreating the object completely uncompressed) is:

alter table BOM.MTL_MATERIAL_TRANSACTIONS move nocompress;
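Keep in mind the move leaves the table's indexes unusable, so they have to be rebuilt afterward. A minimal sketch of that step (the rebuild target below is a placeholder, not necessarily one of my index names):

-- the move leaves dependent indexes UNUSABLE; find them, then rebuild each one
select owner, index_name
from dba_indexes
where table_owner = 'BOM'
and table_name = 'MTL_MATERIAL_TRANSACTIONS'
and status = 'UNUSABLE';

-- for each index returned (name below is a placeholder):
alter index BOM.SOME_INDEX_NAME rebuild;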

I have to say I really wanted this product to work. In principle it's a great idea, and in practice it came close to what I would call success:

  • Disk sat idle most of the day because everything was in the buffer cache
  • Most objects compressed extremely well, and total disk consumption fell significantly
  • With the exception of Dataguard (and the unexplained OLTP ORA-00600), it operated seamlessly with the apps

In the end, however, I’d tell anyone considering it to wait.  It’s not there yet.

*edit* Dataguard has been up and running just fine for over 2 weeks now, since I uncompressed all the database objects. Oracle development must be very close to what they think is a fix, based on my SR/bug. But I'm not putting compression back in anytime soon.

July 22, 2009

Advanced Compression … not ready for prime time?

Filed under: 11g, advanced compression, Oracle — kkempf @ 7:50 pm

First, a little background. We have a highly customized bolt-on application in 11i for automated data collection (barcode scanning) and some label generation, which we uniformly despise as an IT group. Functionally, it does the job well enough, and makes the work on the floor more accurate, faster, and in truth probably a bit easier for the manufacturing workload. We despise it from an IT perspective because it's buggy, runs on a Windows (read: inherently unreliable) application server, and appears to be built on some combination of DOS, VB, and the .NET framework. It requires 2 additional Oracle databases to run (the ERP, somehow, manages to run on only one…) and, if I had to guess, has caused 95+% of our unplanned downtime in the last 2 years. Our favorite part is that it has a sort of ad-hoc query component which can produce some of the worst explain plans I've ever seen. It was no surprise, then, that today it caused me more grief.

EM started showing high "concurrency" contention tied to an object in our custom schema, which clearly made this bolt-on the culprit. More technically, these were buffer busy waits, and I couldn't get a good grasp on what had changed. As per standard operating procedure, I restarted the application (which often fixes problems, since it's so unstable), but this failed to shake the issue. I then bounced the Windows application server it runs on, but this, too, was to no avail. It was churning through DML transactions with many blocking sessions and buffer waits.
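For what it's worth, most of what I could see at that point came from polling v$session; a rough sketch of the kind of query I was running (not exactly what I typed, but the idea):

-- who is stuck on buffer busy waits, and who (if anyone) is blocking them
select sid, serial#, blocking_session, event, sql_id, seconds_in_wait
from v$session
where event = 'buffer busy waits';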

When I finally got a look at the alert log, I saw this entry (note: this is not a typo, it really says "Compressionion"):

ORA-00600: internal error code, arguments: [OLTP Compressionion Block Check], [5], [], [], [], [], [], [], [], [], [], []

It spit out a trace file which was effectively unreadable to me (hex and block dumps, no human-readable text), and tkprof showed nothing. On a hunch, I uncompressed the table at the center of the problem. For now, this seems to have resolved the issue. I don't think I'd describe the nature of the updates to this table as OLTP, but, to be fair, it is a reasonably volatile table which was compressed, and I thought that might be the issue. Despite myself, I opened an SR with the alert log and trace file, just to see what Oracle thought. No word yet, but when I searched Metalink/My Oracle Support for the term "OLTP Compressionion Block Check" or, what I thought was the correct error, "OLTP Compression Block Check", I had 0 hits. None. Guess we're on the bleeding edge of Advanced Compression. Seriously, what the heck is a Compressionion? It's almost laughable: are we so far out on the cutting edge that even the error messages/code blocks haven't been proofread?
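Incidentally, if you want to see which tables in a schema are compressed (and how) before you start suspecting them, the dictionary will tell you. A quick sketch (substitute your own schema for the placeholder):

-- list compressed tables in a given schema; COMPRESS_FOR shows the flavor of compression
-- (for partitioned tables the same columns live in dba_tab_partitions)
select owner, table_name, compression, compress_for
from dba_tables
where owner = 'YOUR_SCHEMA'
and compression = 'ENABLED';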

July 21, 2009

LogMiner & Advanced Compression

Filed under: 11g, advanced compression, LogMiner, Oracle — kkempf @ 8:06 pm

Because my code fix from Oracle doesn’t seem imminent, I decided to take a look at the offending archivelog from LogMiner’s perspective today, and see if I could glean any useful knowledge about what it was trying to do when it killed the Dataguard apply service.  If nothing else, this serves as a good demo of how to see the contents of one archivelog in any environment; it’s not something I’ve used since RDBMS 8i days, and the Oracle documentation is pretty disorganized (see here).

Here are the “knowns” going into this experiment, which were easily obtained by looking at the alert log on my standby:

  1. Archive log sequence# 23676 causes Dataguard apply to crash
  2. The segment in question is BOM.CST_ITEM_COST_DETAILS

I started by restoring the “offending” archivelog, then invoking LogMiner and adding it to the current LogMiner worklist:

exec dbms_logmnr.add_logfile(logfilename => '/usr/tmp/PROD00010681000855__23676.arc', options => dbms_logmnr.new);

Next I started LogMiner, using the existing dictionary from the online catalog (I tried a few other ways, such as a flat file export, but this was easiest):

exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

Now I wanted to see what operation was going on at the time of the block corruption:

select
  seg_owner
 ,seg_name
 ,operation
 ,sql_redo
 ,sql_undo
from
  v$logmnr_contents
where
  seg_owner = 'BOM'
and
  seg_name = 'CST_ITEM_COST_DETAILS'
;

SEG_OWNER  SEG_NAME                  OPERATION       SQL_REDO             SQL_UNDO
---------- ------------------------- --------------- -------------------- --------------------
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
BOM        CST_ITEM_COST_DETAILS     UNSUPPORTED     Unsupported          Unsupported
...
827 rows selected.

Yep, totally useless: 100% "Unsupported" values. It had been a while since I'd used LogMiner, so I wanted to be sure I was doing it right; I took a (short) but otherwise random SCN and received the expected results:

select
 seg_owner
 ,scn
 ,seg_name
 ,operation
 ,sql_redo
 ,sql_undo
from
 v$logmnr_contents
where
 sql_redo != 'Unsupported'
and
 scn = 5975016494077
;

SEG_OWNER             SCN SEG_NAME    OPERATION       SQL_REDO             SQL_UNDO
---------- -------------- ----------- --------------- -------------------- --------------------
APPLSYS     5975016494077 FND_LOGINS  INSERT          insert into "APPLSYS delete from "APPLSYS
                                                      "."FND_LOGINS"("LOGI "."FND_LOGINS" where
                                                      N_ID","USER_ID","STA  "LOGIN_ID" IS NULL
                                                      RT_TIME","END_TIME", and "USER_ID" IS NUL
                                                      "PID","SPID","TERMIN L and "START_TIME" I
                                                      AL_ID","LOGIN_NAME", S NULL and "END_TIME
                                                      "SESSION_NUMBER","SU " IS NULL and "PID"
                                                      BMITTED_LOGIN_ID","S IS NULL and "SPID" I
                                                      ERIAL#","PROCESS_SPI S NULL and "TERMINAL
                                                      D","LOGIN_TYPE","SEC _ID" IS NULL and "LO
                                                      URITY_GROUP_ID") val GIN_NAME" IS NULL an
                                                      ues (NULL,NULL,NULL, d "SESSION_NUMBER" I
                                                      NULL,NULL,NULL,NULL, S NULL and "SUBMITTE
                                                      NULL,NULL,NULL,NULL, D_LOGIN_ID" IS NULL
                                                      NULL,NULL,NULL);     and "SERIAL#" IS NUL
                                                                           L and "PROCESS_SPID"
                                                                           IS NULL and "LOGIN_
                                                                           TYPE" IS NULL and "S
                                                                           ECURITY_GROUP_ID" IS
                                                                           NULL and ROWID = 'A
                                                                           AElDrAAPAAAJaQACw';

At this point, I wasn't sure what the relationship was between Unsupported and object types; a little digging around on Metalink and Doc 282994.1 made it clear that certain actions and certain object types throw this message, but none of them were specific to the 11g RDBMS. This, in addition to the fact that the document was ancient, made me keep looking.

I had a hunch that compressed tables were the culprit, and that LogMiner simply couldn't display the redo/undo SQL because of the compression. However, when I did a little more querying, I found that a compressed table usually shows Unsupported in the SQL_REDO and SQL_UNDO columns, but not always. By contrast, uncompressed objects usually contain real SQL there, but sometimes also show Unsupported. I'm going to see if there's a tie to the action type tomorrow, and I'll update the post.
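In the meantime, a quick way to look for that tie is just to group the mined rows by operation type; a sketch of what I plan to run against the same LogMiner session:

-- break down the mined rows by operation for the segment in question
select operation, count(*)
from v$logmnr_contents
where seg_owner = 'BOM'
and seg_name = 'CST_ITEM_COST_DETAILS'
group by operation;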

*edit* "Official" word from support is as follows:

LogMiner does not support these datatypes and table storage attributes:
– BFILE datatype
– Simple and nested abstract datatypes (ADTs)
– Collections (nested tables and VARRAYs)
– Object refs
– Tables using table compression
– SecureFiles

This fails to explain why I had some advanced compression objects which did have visible redo/undo, but I grow tired of dealing with support.

July 20, 2009

Advanced Compression in 11i, by the numbers

Filed under: 11g, advanced compression, Oracle — kkempf @ 2:30 pm

Aside from the obvious "it doesn't play nice with Dataguard" issue I'm having with 11i/11g Advanced Compression, I thought I'd throw out some hard numbers to show just how much this technology is buying me so far. I've not compressed everything, but have been steadily running compresses as time permits in maintenance windows. My first priority was to compress the objects in my buffer cache, so I created a table holding the "pre-compression" state of everything; I add to that table as I compress more objects (there's a rough sketch of how the sizes are captured further down). Extracting data from this statistics table, here are my "top twenty" tables based upon size saved:

TABLE_NAME                         PRE_COMP_MB  POST_COMP_MB  MB_REDUCED  COMP_RATIO
BOM.CST_ITEM_COST_DETAILS              2013.25        391.75      1621.5        0.81
INV.MTL_MATERIAL_TRANSACTIONS          2334.88        774.88        1560        0.67
INV.MTL_TRANSACTION_ACCOUNTS           1641.38        526.88      1114.5        0.68
AR.AR_TRANSACTIONS_REP_ITF              962.25        136.75       825.5        0.86
MRP.MRP_AD_RESOURCE_REQUIREMENTS        808.25        212.75       595.5        0.74
INV.MTL_TRANSACTION_LOT_NUMBERS          922.5           376       546.5        0.59
WIP.WIP_TRANSACTIONS                    797.75        332.38      465.37        0.58
APPLSYS.FND_LOGINS                         233         10.38      222.62        0.96
MRP.MRP_AD_OPR_RESS                     431.63        220.88      210.75        0.49
AR.RA_CUSTOMER_TRX_LINES_ALL            268.25        102.75       165.5        0.62
APPLSYS.WF_NOTIFICATION_OUT              132.5          0.13      132.37           1
PO.RCV_TRANSACTIONS                      189.5          57.5         132         0.7
WSH.WSH_DELIVERY_DETAILS                 263.5        141.25      122.25        0.46

That’s a sum of 8405.5 MB, or 8.2 GB so far in just 20 tables (all objects to date put me in the 9.2 GB range of savings). In my case, this is about 1/3 of my SGA saved, as most of these objects were 100% in the buffer cache.
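For anyone curious how those sizes were captured: I'm just snapshotting out of the dictionary. A minimal sketch under illustrative names (not my actual statistics table), run before compressing:

-- sketch: snapshot current table sizes from the dictionary
create table xx_comp_stats as
select owner,
       segment_name as table_name,
       round(sum(bytes)/1024/1024,2) as pre_comp_mb
from dba_segments
where segment_type = 'TABLE'
group by owner, segment_name;

Re-querying dba_segments after the compress/move gives the post-compression side, and the MB saved and ratio fall out from the difference.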

It raises some interesting questions. Is the cost of Advanced Compression justified?  Right now, I would have to say, no way, because of the Dataguard bug. But let’s assume Oracle manages to fix that.  A few other considerations off the top of my head are:

  • What is the cost of instead putting another 8/16/32GB of RAM in the server(s)?
    • Does the hardware support more RAM or is it maxed out?
    • In the case of RAC, multiple instances make this expensive fast
  • What is the value of having faster RMAN backups?
    • Backups take less time because compressed blocks don’t write as much data
    • The ZLIB compression option is now available, and it runs faster than the old BZIP2
  • Do the users even perceive a benefit in terms of items being in the buffer cache?
    • Performance may be more limited by latency across the WAN than a few disk reads
    • Likely highly variable depending upon the complexity of reports and queries
  • What is the savings in disk usage, when all the objects shrink?
    • Allocated space will go down a significant amount, across all tablespaces
    • May defer the cost of Information Lifecycle Management (ILM) solutions such as Applimation
    • Multiply this savings times all environments: regression testing, Dataguard, training, development, etc.
  • What kind of deal can you get from Oracle?
    • Everyone's discount is different, based on the economy and the "fire sale" end-of-quarter/fiscal-year incentives Oracle may give you

I'm not trying to be too contrary in my analysis, but I think the only right answer is "it depends".

July 14, 2009

No kidding

Filed under: Oracle — kkempf @ 10:52 am

I received this email from the "Oracle Certification Program" today, regarding a salary survey which I didn't participate in, and found it amusing.


Hi Everyone,

Thanks to everyone who participated in our salary survey. We are busy working to compile, summarize and publish the results. The information will be available through the Oracle certification website and I’ll let you know through the Oracle Certification Blog when it is ready.

In the meantime however, I’d like to share some pre-release details that are interesting:

1. Certified professionals earn over 13% more than their non-certified counterparts.
2. Years of experience in a role makes a big difference in overall compensation.
3. Pay between various job roles (architect, developer, administrator) can be markedly different.
4. Professionals who have earned more than one Oracle certification earn more than those who hold one, and typically – the more certifications that you hold, the more likely you are to earn more.

I’ll discuss specific results in more detail when the salary survey results become available, and give my perspective on what it means to certification-holders and those considering certification.

I look forward to your comments and thoughts!

Thanks,

Paul Sorensen | Director, Oracle Certification

Wow. No kidding. The Director of Oracle Certification has concluded that certified professionals make more money, and that the more certifications you have, the more money you make. In yet another amazing revelation, he states that the longer you've been doing the job, the more money you make. One final observation of my own: the more certifications you have, the more money Oracle makes.

July 13, 2009

There’s a reason EM is free…

Filed under: Enterprise Manager, Oracle — kkempf @ 7:46 pm

I'm mostly kidding about the title of this entry; on the whole, I really like Enterprise Manager Grid Control. It simplifies management of my Oracle databases and backups, and sends Blackberry notifications when something is amiss. I don't use the "pay" 11i pack; I rely on custom-written SQL (User-Defined Metrics, or UDMs) to keep me informed if there is something wrong in the apps. It's not that I'm opposed to the management packs (we use, and pay for, diagnostics & tuning, and they've saved me a lot of time), I just don't see what it brings to the table for me besides another annual maintenance fee.

Well, it turns out this past weekend, for a yet-undetermined reason, EM stopped collecting information from all agents. This is really obnoxious, as I didn't even know I was "blind" to my Oracle databases. It just silently stopped collecting. Like a union on strike without the picket line. It happens that I did some minor maintenance Sunday, which required me to bounce my ERP PROD database, and I was curious how quickly my buffer cache recovered and how it was behaving. Well, surprise, surprise: all my data was stale as of 11pm Saturday night.

A bit of background: I run EM 10.2.0.5 "Grid Control" on RH5 Linux x86_64, with about 4GB devoted to the SGA and 2 processors, all in a virtual machine. There's nothing unusual about this; it's always performed fine.

Alright, back to the problem at hand. I did a little bit of checking, bounced some agents, even bounced the whole EM application server and database, just to be sure it was running normally. Everything checked out. The agents uploaded fine (or at least thought they did, as far as I could tell) and I had no problem doing real-time monitoring of any of my systems. This was puzzling, so I opened the perfunctory SR to see if there was any intelligent life at home today at Oracle support. Turns out, no. The analyst asked me for reasonable files, such as logs from the agent. But it was going nowhere fast. So I reinstalled the agent on my PROD system, thinking it might be messed up somehow; this has happened from time to time. No dice.

Then the analyst started asking me to do downright dumb things. She noticed that there was an error message in the log about one of my custom metrics complaining about a trailing semi-colon at the end of my SQL. Well, this has never hurt in the past, and although I will admit that there was an error, and definitely a problem with one of my custom metrics, I failed to understand how this could have run for months and then suddenly caused catastrophic failure Saturday night at 11pm (nobody had done anything to EM all day Saturday). About this time, I gave up on the analyst and started doing some real digging on my own.

I looked in the EM application home, and noticed that there were a ton of .xml files in $ORACLE_HOME/sysman/recv/errors.  This didn’t look right; I didn’t really care about the metrics at this point that were “stuck” so I deleted everything in that directory.  Then I found a note which mentioned running this code as sysman against my EM repository, based on some odd “unavailable partition” errors I saw in the logs:

SQL> exec emd_maintenance.analyze_emd_schema('SYSMAN')
SQL> exec emd_maintenance.partition_maintenance

Wouldn't you know it, agents started reporting in and things returned to normal.

It’s sad, because when I was a rookie, my Oracle mentor taught me to open a TAR/SR on every issue you couldn’t solve in short order, believing that 2 heads were better than one, and this was the analysts’ specialty.  It turns out, as of late, I’m about 0 for 5 on analysts solving my problems.  I don’t know if they’re overworked, underqualified, or just plain incompetent, but I just don’t have any faith anymore in support analysts.  I do much better searching on my own.   Don’t get me wrong, in most cases without Metalink I couldn’t have figured out the solution, but it pains me to see my company shell out so much money for support when all I really need is Metalink access.  I guess that’s why I get paid.

July 9, 2009

What’s your DBID?

Filed under: 11g, Oracle, RMAN, Utilities — kkempf @ 3:11 pm

I started reviewing recovery scenarios and realized that if I ever had to do a complete database loss recovery without my recovery catalog, determining my DBID would be problematic. So I did a little shell scripting and came up with a simple solution at the OS level. This is written for bash, on Linux, and assumes you can receive email from the server. A simple way to confirm this is to type echo "Test" | mail -s "Email Test" dba@yourcompany.com and see if it gets through. If it doesn't, check to see if sendmail is running (#service sendmail status), or consult your system admins. Anyway, here's the script:

cat incarnation.sh

#!/bin/bash
# source the ORACLE_HOME that rman runs out of, however you do that
. /u01/appprod/oracle/proddb/11.1.0/PROD_myserver.env
RCATUSER="whatever your recovery catalog user is"
RCATPASS="whatever your recovery catalog user password is"
DBA="your email"
# really only necessary if you're not running this script out of the recovery catalog home
REPSID="SID of your recovery catalog"

echo "list incarnation;" | rman catalog ${RCATUSER}/${RCATPASS}@${REPSID} | mail -s "RMAN DBIDs" $DBA

Now you can cron it up (crontab -e) and in the case below you’ll be updated every weekday @ 8am:
# email me the DBIDs every weekday
0 8 * * 1-5 /scratch/oracle/dba/scripts/incarnation.sh > /dev/null

Your email will look something like this:

Recovery Manager: Release 11.1.0.7.0 - Production on Thu Jul 9 15:39:21 2009
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to recovery catalog database
RMAN>
List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
23832   23839   ADV_TRN  292427552        PARENT  1          05-FEB-06
23832   23833   ADV_TRN  292427552        CURRENT 440636     02-MAY-07
46144   46151   ADV_TEST 491394006        PARENT  1          05-FEB-06
46144   46145   ADV_TEST 491394006        CURRENT 440636     11-JUL-07
46214   46221   AAD_TEST 914518601        PARENT  1          05-FEB-06
46214   46215   AAD_TEST 914518601        CURRENT 440636     12-JUL-07
...

There may be a more elegant way to do this, but I couldn’t find one. I would have preferred this to be an EM job, except there is no option under RMAN scripts to connect to the recovery catalog only (you must select at least 1 target).
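One last aside: as long as the database is actually up, the DBID is of course just a query away; the whole point of emailing it out is to have it on hand when the database isn't there to ask:

select dbid, name from v$database;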

South Carolina Vacation Spots

Filed under: South Carolina, Vacation — kkempf @ 9:55 am

We’ve been here 4 years now, and have taken advantage (as best we can) of the wide variety of vacation options nearby. Coming from Chicago originally (where one had to drive several hours just to get away from the city), it’s a refreshing change to have the ocean, mountains, lakes, historical sites, big cities and small town festivals within easy reach.

For long weekend type getaways, we’ve had great success with Vacation Rentals by Owner.

Our first foray was a trip to Edisto island, where they say “the locals” go to the beach in SC. It left a great impression on us; quiet, small, relaxing. In many ways it reminds me of Hilton Head Island (though smaller) back in the 80’s when my family used to vacation there. Meaning, it isn’t jam packed with cars, stores, and restaurants, and hasn’t fully embraced the competition of “who can build the biggest house” which seems prevalent on Hilton Head now.

Next we headed up to the mountains for another long weekend, near Walhalla. Despite its proximity to Clemson, it was a wonderful trip. There is some beautiful scenery up there, such as Stumphouse Tunnel Park (which includes Issaqueena Falls) and the Cherokee Foothills Scenic Highway. We also found a hidden gem in High Falls County Park.

Last summer, we met up with a large segment of my wife's family on Fripp Island. This was a total change of pace from Edisto and Hilton Head; it is gated and pretty much like stepping into a resort when you arrive at the island. There are few cars (golf carts and bikes are encouraged instead), and all amenities are owned by the resort: the restaurants, golf courses, swimming pools, marinas, stores, etc. Of the barrier islands we've visited, Fripp had the most natural feel to it; they work hard at (and succeed in) making the island quiet, relaxing and restful. Our only regret was the amenity cards. These are sold as "add-ons" to the weekly rental, to the tune of $75 or $100 a person. In truth, they are not upgrades but necessities, as without one you cannot go out to eat without leaving the island, or swim in the pool. One highlight of the week was a side trip to Parris Island to watch the USMC graduate a class of recruits (open to the public!).

This past fall, we headed to Biltmore to see their Christmas decorations, and the whole clan went so we rented out an entire B&B in Asheville, NC. It was a wonderful arrangement, given the varying ages of the children and adults, and their respective adherence to nap times, etc. This was a lovely old house which served as the perfect “home base” for trips to the Biltmore Estate. The Estate itself was as majestic as ever, and never fails to impress. Being done up for Christmas, it was nothing like I remember inside from my one prior visit 20+ years ago. They call this place America’s Castle, and if you’re ever in the area I strongly urge you to stop and see it.

Finally, as I mentioned elsewhere in this blog, this summer we’re heading to Edisto again, this time with the whole extended family for a week. Will update with our impressions of Edisto during peak season (our last trip was in the Fall).

