
Innovate Integrate Operate

X Platform Migrations: Challenges and Time Reduction Techniques

Raju Kanumury
Vice President
Database Administration Services

AGENDA
Requirements
Complexities
Oracle Migration Path
Customized Migration Path
Time Consuming Operations
Time Reduction Methods
Iterations
Testing Methods

Requirements

Complexities

Oracle Migration Path

Customized Migration Path

Scenario I
Two SUN Solaris machines to four Linux machines
Single-node DB to two-node RAC
Separation of the single-node admin/CM tier into two Parallel Concurrent Processing nodes
Single Forms/Web node to two Forms/Web nodes with a load balancer

Scenario II
Seven HP-UX machines to eighteen Linux machines
Single-node DB to six-node RAC
Separation of the single-node admin/CM tier into two Parallel Concurrent Processing nodes
Four Forms/Web tiers to ten Forms/Web tiers with a shared Application Top
Integration and failover capability for third-party/external software


Scenario I
  OS: Solaris 8 to OEL 5.3
  DB: 8.1.7.4 to 10.2.0.4 (RAC)
  App: 11.5.4 to 11.5.10 CU2
  1.8 TB data migration to Linux
  Downtime allocated: 48 hrs

Scenario II
  OS: HP-UX 11.11 to OEL 5.3
  DB: non-RAC to RAC; 4.0 TB data migration to Linux
  App: latest Financial Family Pack
  JDBC driver updates for RAC compatibility of third-party software
  Resource management of the DB by using services
  Downtime allocated: 40 hrs


Commonalities between Scenario I and II


Single-node DB to RAC DB with ASM
Parallel Concurrent Processing (PCP) configuration
Fixed and tight downtime because of operational requirements
Anticipated increase in user load, since more countries were planned as part of a global rollout
Migration and functioning of customizations
Performance of critical functionalities & processes


Scenario I
  Multiple DB upgrades
    8.1.7.4 to 9.2.0.6
    9.2.0.6 to 10.2.0.4
  Application upgrade
    11.5.4 to 11.5.10 CU2
  Export and import
    1.8 TB of data

Scenario II
  Export and import of a huge amount of data
    4.0 TB of data
  Shared Application Tier configuration
  Adding numerous nodes
    10 forms/web nodes and 2 CM nodes
  Third-party integration that is crucial to most business functionalities


Scenario - I
Source flow:
  DB Backup
  DB Upgrade to 9.2.0.6
  APPS Upgrade 11.5.4 to 11.5.10.2
  DB Upgrade to 10.2.0.4
  Export Data
Target flow:
  Tech Stack Install/Copy
  Create Shell DB
  Import Data
  Apply Financial Family Pack
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration

Scenario - II
Source flow:
  Clone POC2 DB/APPS from PROD
  Use prepared document
  Prepare DB for migration
  Export Data
Target flow:
  Tech Stack Install/Copy
  Create Shell DB
  Import Data
  Shared Apps Tier configuration
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration
  Third Party Integration


Scenario I
Major Activity                 Pre-Downtime   Downtime
Source DB Backup                    4            -
Source DB Upgrade (9.2.0.6)         -            6
Source APP Upgrade                  -           18
Source DB Upgrade (10.2.0.4)        -            6
Source Export                       -            8
Target DB Shell                     8            -
Target Import                       -           15
Target TechStack                    -            4
Target AutoConfig                   -            2
Target Add Nodes                    -            4
Target Apply Patches                -            6
Verification & Validation           -            4
Total Time                         12           73

Scenario II
Major Activity (Time in Hrs)           Pre-Downtime   Downtime
Source DB Backup                            4            -
Prepare Source DB for Export                -            2
Source Export                               -           15
Target DB Shell                             8            -
Target Import                               -           32
Target TechStack                            -            4
Shared Appl Top & AutoConfig                -            2
PCP Config                                  -            2
Add Nodes (10 forms/web nodes total)        -            6
Third Party Configuration                   -            8
Verification & Validation                   -            4
Total Time                                 12           75


Scenario - I
Source flow:
  Build Standby DB (replaces the full DB backup)
  DB Upgrade 9.2.0.6
  DB Upgrade 10.2.0.4
  APPS Upgrade 11.5.4 to 11.5.10.2 - eliminated on source (X); this activity is partially migrated from source to target
  Export Data
Target flow:
  Fresh Install of 11.5.10.2
  Create Shell DB
  Import Data
  APPS Upgrade 11.5.4 to 11.5.10.2 (D driver only)
  Apply Financial Family Pack
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration

Scenario - II
Source flow:
  Use prepared document
  Build Standby DB
  Apply Archive Logs
  Prepare DB for migration
  Export Metadata + BigTables & FNDLOBS (pre-downtime stream)
  Export Metadata + AllTables (downtime stream)
Target flow:
  Tech Stack Install/Copy
  Create Shell DB
  Import Users
  Import Big & FNDLOBS
  Import AllTables
  Import Procs etc
  Build Indexes
  Shared Apps Tier configuration
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration
  Third Party Integration


Scenario I Run Times


Major Activity              Pre-Downtime   Downtime
Source Standby DB                6            -
Source App Prep                  -            2
Source DB Upgrade                -            6
Source Export                    -            3
Target Fresh Install             5            -
Target DB Shell                  8            -
Target Import                    -            8
Target App Upgrade               -            8
Target AutoConfig                -            2
Target Add Nodes                 -            4
Target Apply Patches             -            6
Verification & Validation        -            4
Total Time                      19           43

Scenario II Run Times


Major Activity                               Pre-Downtime   Downtime
Source Standby DB                                 8            -
Source Export Big Tables/Metadata                12            -
Prepare Source DB for Export                      -            2
Source Export All Tables/Metadata                 -            6
Target DB Shell                                   8            -
Target Import - Users/Big Tables & Indexes        8            -
Target Import AllTables/Sync Big Tables           -            6
Target Import Procs Etc                           -            3
Target Import - Const & Build Indexes             -            7
Shared Appl Top & AutoConfig                      -            2
PCP Config                                        -            2
Add Nodes (10 forms/web nodes total)              -            2
Third Party Configuration                         -            2
Verification & Validation                         -            4
Total Time                                       36           36

Time Consuming Operations

Time Reduction Methods

Iterations

Testing Methods

Scenario I
  Application Upgrade (d driver)
  DB Upgrade
  Backup Operation
  Export/Import Operations
  PCP Configuration
  Application Configuration

Scenario II
  Size of the DB
  Shared Application Tier
  Number of nodes to be configured
  Third Party integrations


Areas to focus
Analyze whether work can be performed ahead of downtime; push as many activities as possible into the pre-downtime window:
  Build a standby DB
  Sync tables by exporting them ahead of downtime

Look for performance attributes or improvements that can be applied to each individual process:
  Number of parallel workers (adpatch & Data Pump)
  Data Pump performance patches
  Purge obsolete data

Based on DB size, break the process into logical elements so that the same process can be submitted in multiple threads:
  Break export and import into logical groups
  Customize the index creation process
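The "multiple threads" idea above amounts to a simple packing problem: given per-table sizes, assign the largest tables first so that each export/import thread carries roughly the same load. The sketch below is illustrative only; the table names and sizes are hypothetical, and a real migration would read sizes from DBA_SEGMENTS.

```python
import heapq

def balance_groups(table_sizes, n_groups):
    """Greedy longest-processing-time packing: assign each table
    (largest first) to the currently lightest group, so parallel
    export/import threads finish at roughly the same time."""
    heap = [(0, i, []) for i in range(n_groups)]  # (total_gb, group_id, tables)
    heapq.heapify(heap)
    for name, size in sorted(table_sizes.items(), key=lambda kv: -kv[1]):
        total, gid, members = heapq.heappop(heap)
        members.append(name)
        heapq.heappush(heap, (total + size, gid, members))
    return sorted(heap)

# Hypothetical sizes in GB; a real run would query DBA_SEGMENTS.
sizes = {"FND_LOBS": 300, "XX_GL_HIST": 200, "XX_AR_TL": 150,
         "AP_INV": 50, "HR_PPL": 40, "GL_JE": 10}
for total, gid, members in balance_groups(sizes, 2):
    print(f"group {gid}: {total} GB -> {members}")
```

Each group then becomes one export/import parameter file submitted in its own thread.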


Backup Operation
Problem
  Depending on the size of the DB, a backup may take considerable time
  For upgrades or conversions, restoration time also needs to be counted as part of the rollback window

Solution
  Create a physical standby ahead of the conversion and start applying logs; except for applying a few final logs after downtime starts, the majority of this work can be pushed into the pre-downtime window
  Restoration time need not be counted, since the original PROD system remains intact


Export Operation
Problem
  Oracle-provided parameter files contain commands for a full export with some user exclusions
  In proof-of-concept exports, a few big tables and FND_LOBS took considerable time to export
  Analysis showed that the top 10% of tables occupy 50 to 60% of the total DB size

Solution
  Identify and list the top 10 to 15 non-transactional tables (history and TL tables)
  Create MVIEW logs on those tables to keep track of changes
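Creating the tracking logs can be scripted. The sketch below generates CREATE MATERIALIZED VIEW LOG statements for the identified big tables; the owner/table names are illustrative, and whether a PK-based or ROWID-based log applies depends on whether the table has a primary key.

```python
def mview_log_ddl(owner, table, pk_based=True):
    """Generate DDL for a materialized view log so a custom sync job can
    later refresh only the rows changed after the bulk pre-downtime
    export. PK-based logs require a primary key on the table; ROWID
    logs are the fallback for tables without one."""
    capture = "WITH PRIMARY KEY" if pk_based else "WITH ROWID"
    return f"CREATE MATERIALIZED VIEW LOG ON {owner}.{table} {capture}"

# Hypothetical big-table list for the pre-downtime export group.
for owner, table, has_pk in [("XX", "XX_GL_HIST", True),
                             ("APPLSYS", "FND_LOBS", False)]:
    print(mview_log_ddl(owner, table, pk_based=has_pk) + ";")
```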


Export Operation
Solution
Split the full export into multiple parts so that a few big tables can be exported ahead of downtime
Create the shell DB along with the required tablespaces before downtime
Export big tables, FND_LOBS, and metadata using multiple parameter files instead of one
Exclude statistics from all parameter files
Use enough parallel threads for faster export, but do not let the number of parallel threads exceed the number of dump files being generated
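Splitting the export into multiple parameter files can be templated. A minimal sketch, assuming a Data Pump directory object named MIG_DUMP_DIR (hypothetical); TABLES, EXCLUDE=STATISTICS, PARALLEL, and the %U dump-file substitution are standard expdp parameters.

```python
def expdp_parfile(job, tables=None, parallel=4, exclude_stats=True):
    """Render a Data Pump export parameter file for one logical group.
    %U in DUMPFILE produces one file per worker; keep PARALLEL <= the
    number of dump files so no worker sits idle."""
    lines = [
        "DIRECTORY=MIG_DUMP_DIR",         # hypothetical directory object
        f"DUMPFILE={job}_%U.dmp",
        f"LOGFILE={job}.log",
        f"PARALLEL={parallel}",
    ]
    if tables:                            # omitted -> full export
        lines.append("TABLES=" + ",".join(tables))
    if exclude_stats:                     # per the tip above
        lines.append("EXCLUDE=STATISTICS")
    return "\n".join(lines)

# One parfile for the pre-downtime big tables, one for the rest.
print(expdp_parfile("exp_bigtabs",
                    ["APPLSYS.FND_LOBS", "XX.XX_GL_HIST"], parallel=8))
```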


Export Operation Time Lines


Import Operation
Problem
  Oracle-provided parameter files contain commands for a full import
  In proof-of-concept imports, the majority of the time was spent building indexes and primary key constraints
  Data Pump serializes activities such as index and procedure creation, which can consume a lot of time

Solution
  Split the full import into multiple parts so that pre-exported tables can be imported ahead of downtime
  Develop a custom process to sync the imported tables between source and target based on the changes recorded in the MVIEW logs
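A sketch of the custom sync step, assuming PK-based MVIEW logs (Oracle names them MLOG$_&lt;table&gt;) and a database link back to the source (SRC_PROD here is hypothetical): delete the target rows whose keys appear in the log, re-insert the current source versions, then purge the log.

```python
def sync_sql(owner, table, pk_cols, dblink="SRC_PROD"):
    """Generate the per-table delta-sync statements: refresh rows whose
    keys were recorded in the source-side MVIEW log since the bulk
    import, then clear the log. Source-side deletes are handled too:
    those keys appear in the log but match no source row, so the DELETE
    removes them and the INSERT brings nothing back."""
    log = f"{owner}.MLOG$_{table}@{dblink}"
    pk = ", ".join(pk_cols)
    return [
        f"DELETE FROM {owner}.{table} WHERE ({pk}) IN (SELECT {pk} FROM {log})",
        f"INSERT INTO {owner}.{table} SELECT * FROM {owner}.{table}@{dblink} "
        f"WHERE ({pk}) IN (SELECT {pk} FROM {log})",
        f"DELETE FROM {log}",
    ]

for stmt in sync_sql("XX", "XX_GL_HIST", ["HIST_ID"]):
    print(stmt + ";")
```

This only works for the non-transactional history/TL tables identified earlier, where the delta volume during the sync window stays small.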


Import Operation
Solution
As a first pass, import users, table structures, and data only; this ensures the sync process will not affect any other data
Import the other DB objects (views, triggers, procedures, etc.) except constraints as soon as the data imports are complete
Exclude indexes in all import parameter files
Customize index creation by building indexes externally: use an automated procedure to load the index definitions from the dump file into a table, and write code so that multiple indexes can be created in parallel
Run the constraint import in parallel with the external index creation
Use enough parallel threads for faster import, but do not let the number of parallel threads exceed the number of dump files generated by the export
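The external, parallelized index build can be driven by a small worker pool. A sketch: run_sql is whatever executes DDL against the target (a cx_Oracle connection in practice); a stub stands in here so the sketch runs standalone, and the index names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def build_indexes(ddl_list, run_sql, workers=4):
    """Build several indexes concurrently, outside Data Pump's
    serialized index phase. `run_sql` executes one DDL statement
    against the target DB; results come back in submission order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_sql, ddl_list))

# Stub executor so the sketch runs without a database; it just
# records the statement and reports the index name (3rd token).
executed = []
def fake_run(ddl):
    executed.append(ddl)
    return "built " + ddl.split()[2]

ddls = [f"CREATE INDEX XX_IDX{i} ON XX.BIG_TAB (COL{i}) PARALLEL 4"
        for i in range(3)]
print(build_indexes(ddls, fake_run, workers=2))
```

The worker count plays the same role as the parallel-thread limit above: size it to what the target I/O subsystem can sustain.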


Import Operation Time Lines


Oracle Suggested Import
  03:00          Start of Import
  03:00 - 03:25  Import Users
  03:25 - 11:00  Full Import (32 hours)
  11:00          End of Import

Custom Pre-Downtime Import Activities (4/4/2010)
  08:00          Start of Import
  08:00 - 08:25  Import Users
  08:25 - 13:00  Import BigTables
  08:25 - 11:55  Import FND_LOBS
  13:00 - 16:00  Build Indexes - Big Tables
  16:00          End of Import

Custom Downtime Import Activities
  03:00          Start of Import
  03:00 - 09:00  Import AllTables
  09:00 - 11:59  Import Procs
  12:00 - 16:00  Import Constraints
  12:00 - 19:00  Build Indexes
  19:00          End of Import
  19:00 - 11:00  Downtime Savings vs. Oracle suggested import


Application/PCP Configuration
Problem
  Code tree needs to be copied from the source
  New tech stack needs to be installed using rapidwiz; developer patch sets also need to be applied
  Environment files and context files need to be modified to reflect the correct configuration and instance
  PCP settings need to be enabled: profile options must be set correctly for PCP, and concurrent managers must be updated with the right primary and secondary nodes

Solution
  Implement a code and patch freeze; one week before the go-live date is preferred
  During that week, conduct a dry run on the future production infrastructure simulating the complete go-live activities


Application/PCP Configuration
Solution
Use the same naming conventions, ports, directories, etc. that will be used in future production
After the dry-run configuration is complete, test all important components and functionalities
Upon successful testing, download the concurrent manager definitions using FNDLOAD
Create scripts to update the database components needed for PCP configuration, such as certain profile values
Preserve all components except the database; drop the database and recreate the shell database to be ready for the go-live activity
During go-live, execute only the database updates, data loads, and AutoConfig
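The scripted PCP database updates can be generated ahead of time. A sketch that emits the UPDATE statements assigning primary/secondary nodes to each concurrent manager queue; FND_CONCURRENT_QUEUES with NODE_NAME/NODE_NAME2 is the standard EBS table for this, but the queue and node names below are hypothetical.

```python
def pcp_node_updates(assignments):
    """Emit the UPDATE statements that point each concurrent manager
    queue at its primary/secondary PCP nodes (NODE_NAME / NODE_NAME2
    on FND_CONCURRENT_QUEUES). Intended to be run via sqlplus during
    go-live, after the shell DB is recreated."""
    stmts = [
        "UPDATE FND_CONCURRENT_QUEUES "
        f"SET NODE_NAME = '{primary}', NODE_NAME2 = '{secondary}' "
        f"WHERE CONCURRENT_QUEUE_NAME = '{queue}'"
        for queue, (primary, secondary) in assignments.items()
    ]
    return stmts + ["COMMIT"]

for s in pcp_node_updates({"STANDARD": ("CM1", "CM2"),
                           "XXCUSTOM": ("CM2", "CM1")}):
    print(s + ";")
```

Keeping these updates in a script is what allows the go-live window to contain only "database updates, loads, and AutoConfig".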


Application/DB Upgrade
Problem
  Upgrades are usually performed by applying large maintenance packs; extraction and verification take time, and the effort increases if the customer uses MLS
  Of the three drivers, the d driver can take a long time to complete, depending on the products the customer uses
  Some conversion programs are the main culprits
  A lot of time is spent on object compilation during DB upgrades

Solution
  Understand the products used by the customer and the critical functions in these processes


Application/DB Upgrade
Solution
Suggest that the customer purge any unused historical data related to these products
In the proof-of-concept run, identify the workers that took significant time and tune their SQL or code
Create custom indexes based on the logic, and drop them after the process completes
Make sure archiving is turned off on the DB while maintenance packs are applied
Make sure statistics on the objects being used are up to date
Use parallel compile options to speed up the DB upgrade


Iteration - I
Iteration 1 (Proof of Concept) - Hours Taken: 71 - DB Nodes: 1, App Nodes: 1

Source:
  Clone POC DB/APPS from PROD
  Read Oracle migration notes
  Prepare document with detailed steps needed for migration
  Prepare DB for migration
  Export Data

Target:
  Tech Stack Install/Copy
  Create Shell DB
  Import Data
  Shared Apps Tier configuration
  Third Party Integration


Iteration - II
Iteration 2 (Verify Steps & Fine-Tune Processes) - Hours Taken: 54 - DB Nodes: 2, App Nodes: 4

Source:
  Clone POC2 DB/APPS from PROD
  Use prepared document
  Prepare DB for migration
  Fine-tune DB & export parameters
  Export Data

Target:
  Tech Stack Install/Copy
  Create Shell DB
  Import Data
  Shared Apps Tier configuration
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration
  Third Party Integration


Iteration - III
Iteration 3 (Customize Time-Consuming Processes) - Hours Taken: 42 - DB Nodes: 3, App Nodes: 4

Source:
  Clone POC3 DB/APPS from PROD
  Use prepared document
  Prepare DB for migration
  Export Metadata
  Export Data
  Export BigTables & FNDLOBS

Target:
  Tech Stack Install/Copy
  Create Shell DB
  Import Users
  Import Data
  Import Big & FNDLOBS
  Build Indexes
  Shared Apps Tier configuration
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration
  Third Party Integration


Iteration - IV
Iteration 4 (Parallelize Operations within Processes) - Hours Taken: 36 - DB Nodes: 3, App Nodes: 4

Source:
  Clone POC4 DB/APPS from PROD
  Use prepared document
  Prepare DB for migration
  Export Metadata + AllTables
  Export Metadata + BigTables & FNDLOBS

Target:
  Tech Stack Install/Copy
  Create Shell DB
  Import Users
  Import Big & FNDLOBS
  Import AllTables
  Import Procs etc
  Build Indexes
  Shared Apps Tier configuration
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration
  Third Party Integration


Iteration - V
Iteration 5 (Parallelize Operations within Processes) - Hours Taken: 32 - DB Nodes: 6, App Nodes: 12

Pre-Downtime Source:
  Use prepared document
  Create MVIEW logs on exported tables
  Export Metadata
  Export BigTables & FNDLOBS

Pre-Downtime Target:
  Tech Stack Install/Copy
  Create Shell PROD DB
  Import Users
  Import Big & FNDLOB Tables
  Import Procs etc
  Build Indexes on Imported Tables
  Shared Apps Tier configuration
  Add Nodes Configuration
  Parallel Concurrent Processing Configuration
  Third Party Integration

Get Target Ready for Downtime:
  Preserve Tech Stack & Code Trees
  Drop PROD DB on Target
  Create Shell PROD DB
  Import Users
  Import BigTables & FNDLOBS
  Build Indexes on Imported Tables

Downtime Source:
  Use prepared document
  Prepare DB for migration
  Export Metadata
  Export AllTables
  Export Statistics from Source

Downtime Target:
  Import AllTables
  Import Procs etc
  Import Statistics into new PROD DB
  Build Indexes
  DB updates related to PCP
  Run AutoConfig to configure nodes
  Third Party Integration


Methods Used
Functionality Testing
  Can be performed with automated tools such as LoadRunner or Oracle Application Testing Suite using prewritten test scripts
  Can also be performed by super users to assure that main functionality is working
  Should be part of iterations 1 and 3

Performance Testing
  If the customer has a tool such as LoadRunner to simulate business functionality, testing will be more scientific; customers without such tools need manual effort, and testing will be limited to critical business functionalities
  Online transactions and batch jobs/processes can be timed in the existing PROD system and then compared against the migrated system
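Comparing the tracked run times between the existing PROD and the migrated system can be automated. A minimal sketch with hypothetical job names and times; anything slower than the baseline by more than a chosen tolerance becomes a tracing candidate.

```python
def compare_timings(prod_times, migrated_times, tolerance=0.10):
    """Flag processes whose elapsed time on the migrated system exceeds
    the existing-PROD baseline by more than `tolerance`; these become
    the candidates for tracing with TKPROF / Trace Analyzer."""
    flagged = []
    for job, base in prod_times.items():
        new = migrated_times.get(job)
        if new is not None and new > base * (1 + tolerance):
            flagged.append((job, base, new))
    return flagged

# Hypothetical elapsed minutes for a few tracked batch jobs.
prod = {"GL Journal Post": 60, "AR Aging Report": 30, "Payroll Run": 120}
migr = {"GL Journal Post": 55, "AR Aging Report": 45, "Payroll Run": 118}
print(compare_timings(prod, migr))
```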


Methods Used
Performance Testing
  Any process whose elapsed time differs between the existing PROD and the migrated system has to be traced
  Tracing for online transactions can be done with Forms trace, whereas batch programs need tracing turned on at the program level
  All trace files can be analyzed with TKPROF or Trace Analyzer to identify the issues
  This test needs to be performed in iterations 3 and 4

Failover Testing
  Mainly applies to systems with multiple nodes and failover capabilities, e.g. RAC databases, Parallel Concurrent Processing (PCP)


Methods Used
Failover Testing
RAC Database Failover
  SQL*Plus session failover (selects only)
  Application functionality with a node down
PCP Failover
  A couple of DB nodes down
  CM node down
  CM managers fail over from primary to secondary
Load Balancer Failover
  Shutdown of one or multiple nodes of the APPS tier


Methods Used
Load/Capacity Testing
  Requires automated tools to simulate the load; replicating it manually is very hard
  Usually done in batches with different sets of users covering all business functionalities to produce a realistic scenario
  Batch sizes of concurrent users: 200, 400, 600, 1200, and 2000
  Active application nodes: 1, 2, 4, 6, and 8
  Active DB nodes: 1, 2, 4, and 6
  Statistics collected on both application and database tiers:
    CPU/Memory/Disk utilization
    Load average
    Functionality or screen timings

X Platform Migrations

Thank You Q&A
