Raju Kanumury
Vice President
Database Administration Services
AGENDA
Requirements
Complexities
Oracle Migration Path
Customized Migration Path
Requirements
Complexities
Scenario I
Two Sun Solaris machines to four Linux machines
Single-node DB to two-node RAC
Separation of the single-node Admin/CM tier into two Parallel Concurrent Processing nodes
Single Forms/Web node to two Forms/Web nodes with a load balancer
Scenario II
Seven HP-UX machines to eighteen Linux machines
Single-node DB to six-node RAC
Separation of the single-node Admin/CM tier into two Parallel Concurrent Processing nodes
Four Forms/Web tiers to ten Forms/Web tiers with a shared Application Top
Integration and failover capability for third-party/external software
Scenario I
OS: Solaris 8 to OEL 5.3
DB: 8.1.7.4 to 10.2.0.4 (RAC)
App: 11.5.4 to 11.5.10 CU2
Scenario II
OS: HP-UX 11.11 to OEL 5.3
DB: non-RAC to RAC
4.0 TB data migration to Linux
Anticipated increase in user load, since more countries were to be added as part of a global rollout
Migration and functioning of customizations
Performance of critical functionalities and processes
Scenario I
Multiple DB Upgrades
8.1.7.4 to 9.2.0.6
9.2.0.6 to 10.2.0.4
Scenario II
Export and import of a huge amount of data
4.0 TB of Data
Application Upgrade
11.5.4 to 11.5.10 CU2
Scenario - I
Source: DB Backup -> DB Upgrade 9.2.0.6 -> DB Upgrade 10.2.0.4 -> Export Data
Target: Create Shell DB -> Import Data
Scenario - II
Source: Clone POC2 DB/APPS from PROD -> Prepare DB for migration (using the prepared document) -> Export Data
Target: Create Shell DB -> Import Data
Scenario I
Major Activity (Time in Hrs): Pre-Downtime / Downtime
Source DB Backup: 4 / -
Source DB Upgrade (9.2.0.6): - / 6
Source APP Upgrade: - / 18
Source DB Upgrade (10.2.0.4): - / 6
Source Export: - / 8
Target DB Shell: 8 / -
Target Import: - / 15
Target TechStack: - / 4
Target AutoConfig: - / 2
Target Add Nodes: - / 4
Target Apply Patches: - / 6
Verification & Validation: 12 / 4
Total Time: 24 / 73
Scenario II
Major Activity (Time in Hrs): Pre-Downtime / Downtime
Source DB Backup: 4 / -
Prepare Source DB for Export: - / 2
Source Export: - / 15
Target DB Shell: 8 / -
Target Import: - / 32
Target TechStack: - / 4
Shared Appl Top & AutoConfig: - / 2
PCP Config: - / 2
Add Nodes (total 10 Forms/Web nodes): - / 6
Third-party configuration: - / 8
Verification & Validation: 12 / 4
Total Time: 24 / 75
Scenario - I
Source: Build Standby DB -> DB Upgrade 9.2.0.6 -> DB Upgrade 10.2.0.4 -> Export Data
Target: Parallel Concurrent Processing Configuration
Scenario - II
Source (using the prepared document):
Build Standby DB
Apply Archive Logs
Prepare DB for migration
Export Metadata
Export BigTables & FNDLOBS
Export AllTables
Target:
Create Shell DB
Import Users
Import AllTables
Import Procs etc.
Build Indexes
Add Nodes Configuration
Shared Apps Tier configuration
Iterations
Testing Methods
Scenario I
Application Upgrade ("d" driver)
DB Upgrade
PCP Configuration
Application Configuration
Scenario II
Size of the DB
Shared Application Tier
Number of nodes to be configured
Third-party integrations
Areas to focus
Analyze whether work can be performed ahead of downtime; push as many activities as possible into the pre-downtime window:
Build Standby DB
Sync tables by exporting ahead of downtime
Look for performance attributes or improvements that can be applied to an individual process:
Number of parallel workers (adpatch & Data Pump)
Data Pump performance patches
Purge obsolete data
Based on DB size, break the process into logical elements so that the same process can be submitted in multiple threads:
Break export and import into logical groups
Customize the index creation process
Backup Operation
Problem
Depending on the size of the DB, backups may take considerable time. In the case of upgrades or conversions, restoration time also needs to be considered as part of the rollback plan.
Solution
Create a physical standby ahead of the conversion and start applying logs. Except for applying a few logs after the start of downtime, the majority of this work can be pushed into the pre-downtime category. Restoration time need not be considered, since the original PROD system remains intact.
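As a sketch of the standby approach described above (connect strings and database names are illustrative, and it is assumed that RMAN backups of PROD are visible to the standby host):

```shell
# Hedged sketch: build the physical standby before the conversion window.
rman target sys/***@PROD auxiliary sys/***@STBY <<'EOF'
DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;
EOF

# Keep the standby in sync until downtime starts; only the last few
# archive logs then need to be applied inside the downtime window.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF
```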
Export Operation
Problem
Oracle-supplied parameter files contain commands for a full export with some user exclusions. In proof-of-concept exports it was observed that a few big tables and FND_LOBS take considerable time to export. Upon analysis it was observed that the top 10% of tables occupy 50 to 60% of the total DB size.
Solution
Identify and list the top 10 to 15 non-transactional tables related to history and TL data
Create MVIEW logs on above tables to keep track of changes
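A minimal sketch of such a change-tracking log, using a hypothetical custom history table (the owner and table name are illustrative; the selected EBS tables would be handled the same way):

```shell
# Hedged sketch: record the PK of every row changed after the early export,
# so a later sync pass can re-copy just those rows.
sqlplus -s "/ as sysdba" <<'EOF'
CREATE MATERIALIZED VIEW LOG ON xxcust.xx_sales_hist
  WITH PRIMARY KEY;   -- creates XXCUST.MLOG$_XX_SALES_HIST
EOF
```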
Export Operation
Solution
Split the full export into multiple parts so that a few tables can be exported ahead of downtime
Create the Shell DB along with the required tablespaces before downtime
Export big tables, FND_LOBS and metadata using multiple parameter files instead of one
Exclude statistics from all the parameter files
Use enough parallel threads for faster export; limit the parallel threads so they do not exceed the number of dump files being generated by the export
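One of the split export parameter files might look like the following hedged sketch (the directory object, table list, file size and parallel degree are all illustrative assumptions):

```shell
# Hedged sketch: big tables and FND_LOBS get their own parameter file so
# this piece can run ahead of downtime, separately from the metadata export.
cat > exp_bigtab.par <<'EOF'
DIRECTORY=MIG_DUMP
DUMPFILE=bigtab_%U.dmp
LOGFILE=exp_bigtab.log
TABLES=APPLSYS.FND_LOBS,XXCUST.XX_SALES_HIST
PARALLEL=8
FILESIZE=16G
EXCLUDE=STATISTICS
EOF

# %U with PARALLEL=8 yields one dump file per worker, keeping the
# parallel degree no higher than the number of dump files.
expdp system/*** parfile=exp_bigtab.par
```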
Import Operation
Problem
Oracle-supplied parameter files contain commands for a full import. In proof-of-concept imports it was observed that the majority of the time is spent building indexes and primary key constraints. Data Pump serializes activities such as index creation and procedure compilation, which can consume a lot of time.
Solution
Split the full import into multiple parts so that the tables exported early can be imported ahead of downtime
Develop a custom process to sync the imported tables between source and target based on the changes logged in the MVIEW logs
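A hedged sketch of such a sync pass, assuming a PK-based MVIEW log on a hypothetical table and a database link SRC_LINK from target back to source (all names are illustrative; the real custom process would loop over the selected tables):

```shell
# Hedged sketch: re-copy only the rows recorded in the source MVIEW log
# since the early export, then commit.
sqlplus -s system/***@TARGET <<'EOF'
DELETE FROM xxcust.xx_sales_hist
 WHERE hist_id IN
       (SELECT hist_id FROM xxcust.mlog$_xx_sales_hist@src_link);

INSERT INTO xxcust.xx_sales_hist
 SELECT s.*
   FROM xxcust.xx_sales_hist@src_link s
  WHERE s.hist_id IN
        (SELECT hist_id FROM xxcust.mlog$_xx_sales_hist@src_link);

COMMIT;
EOF
```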
Import Operation
Solution
As a first pass, import users, table structures and data only; this ensures the sync process will not affect any other data
Apart from constraints, other DB elements such as views, triggers and procedures should be imported as soon as the data imports are complete
Exclude indexes in all import parameter files
Customize index creation by building indexes externally: use an automated procedure to load the index DDL from the dump file into a table, and write code so that multiple indexes can be created in parallel
Run the constraint import in parallel with the index creation that is performed externally to Data Pump
Use enough parallel threads for faster import; limit the parallel threads so they do not exceed the number of dump files generated by the export
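The external index build described above might be sketched as follows (credentials, directory object and parallel degree are illustrative; the csplit pattern assumes each CREATE statement in the SQLFILE output starts on its own line):

```shell
# Hedged sketch: extract index DDL from the dump instead of letting
# Data Pump serialize the builds.
impdp system/*** directory=MIG_DUMP dumpfile=full_%U.dmp \
      include=INDEX sqlfile=index_ddl.sql

# One file per CREATE statement.
csplit -z -f idx_ -b '%03d.sql' index_ddl.sql '/^CREATE /' '{*}'

# Run up to 8 index builds concurrently.
ls idx_*.sql | xargs -P 8 -I{} sh -c '(cat {}; echo exit) | sqlplus -s system/***'
```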
Custom Pre-Downtime Import Activities (sample timeline)
08:00 - 08:25 Import Users
08:25 - 13:00 Import BigTables
13:00 - 16:00 Build Indexes - Big Tables
Application/PCP Configuration
Problem
The code tree needs to be copied from the source
The new tech stack needs to be installed using Rapidwiz, and developer patch sets need to be applied
Environment files and context files need to be modified to reflect the correct configuration and instance
PCP settings need to be enabled; profile options need to be set correctly for PCP, and managers need to be updated with the right primary and secondary nodes
Solution
Implement a code and patch freeze, preferably one week before the go-live date
Conduct a dry run on the future production infrastructure during that week, simulating all go-live activities
Application/PCP Configuration
Solution
Use the same naming conventions, ports, directories, etc. that will be used in future production
After configuration is complete on the dry run, test all important components and functionalities
Upon successful testing, download the manager data using FNDLOAD
Create scripts to update the database components needed for PCP configuration, such as certain profile values
Preserve all components except the database; drop the database and recreate the shell database to be ready for the go-live activity
During go-live activities, execute only the database updates, the loads and AutoConfig
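A hedged FNDLOAD sketch follows, shown with the request-group control file as a concrete example; concurrent manager definitions use their own .lct under the same $FND_TOP/patch/115/import directory, so verify the control file for each entity before use:

```shell
# Hedged sketch: download seed data captured during the dry run,
# then upload it on the rebuilt go-live system.
FNDLOAD apps/$APPS_PWD 0 Y DOWNLOAD \
    $FND_TOP/patch/115/import/afcpreqg.lct \
    cm_request_groups.ldt REQUEST_GROUP

FNDLOAD apps/$APPS_PWD 0 Y UPLOAD \
    $FND_TOP/patch/115/import/afcpreqg.lct \
    cm_request_groups.ldt
```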
Application/DB Upgrade
Problem
Usually upgrades are performed by applying the relevant maintenance packs, which are big in size. Extraction and verification take time, and the effort increases if the customer has MLS
Of the three drivers, the "d" (database) driver can take a long time to complete, depending on the products used by the customer
It was also observed that some conversion programs are the main culprits, and a lot of time is spent on object compilation during DB upgrades
Solution
Try to understand the products used by the customer and the critical functions within these processes
Application/DB Upgrade
Solution
Suggest that the customer purge any unused historical data related to these products
In the proof-of-concept run, identify the workers that took significant time and tune the SQL or code
Create custom indexes based on the logic and drop them after the process completes
Make sure archiving is turned off on the DB during application of maintenance packs
Make sure the statistics for the objects being used are up to date
Use parallel compile options to speed up the DB upgrade
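The parallel compile step can be sketched as follows (the degree of 8 is an illustrative assumption; utlprp.sql is the parallel-recompile script that utlrp.sql drives, and 0 lets UTL_RECOMP pick the degree itself):

```shell
# Hedged sketch: recompile invalid objects in parallel at the end of the upgrade.
sqlplus "/ as sysdba" <<'EOF'
@?/rdbms/admin/utlprp.sql 8
EOF
```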
Iteration - I
Iteration 1 (Proof of Concept): 71 hours taken; DB Nodes: 1, App Nodes: 1
Source: Export Data
Target: Create Shell DB -> Import Data
Iteration - II
Iteration 2 (Verify Steps & Fine-Tune Processes): 54 hours taken; DB Nodes: 2, App Nodes: 4
Source: Export Data
Target: Create Shell DB -> Import Data
Iteration - III
Iteration 3 (Customize Time-Consuming Processes): 42 hours taken; DB Nodes: 3, App Nodes: 4
Source
Target:
Create Shell DB
Import Users
Import Data
Import Big & FNDLOBS
Build Indexes
Add Nodes Configuration
Shared Apps Tier configuration
Iteration - IV
Iteration 4 (Parallelize Operations within Processes): 36 hours taken; DB Nodes: 3, App Nodes: 4
Source
Target:
Create Shell DB
Import Users
Import AllTables
Import Procs etc.
Build Indexes
Add Nodes Configuration
Shared Apps Tier configuration
Iteration - V
Iteration 5 (Parallelize Operations within Processes): 32 hours taken; DB Nodes: 6, App Nodes: 12
Pre-Downtime Source
Pre-Downtime Target:
Import Users
Import AllTables
Import Procs etc.
Build Indexes
Add Nodes Configuration
Shared Apps Tier configuration
Downtime Source
Downtime Target:
Import AllTables
Import Procs etc.
Build Indexes
Import Statistics into new PROD DB
DB Updates related to PCP
Run AutoConfig to configure nodes
Third Party Integration
Methods Used
Functionality Testing
Can be performed by automated tools such as LoadRunner or Oracle Application Testing Suite using prewritten test scripts. Can also be performed by super users to assure that the main functionality is working. Should be part of iterations 1 and 3.
Performance Testing
If the customer has a tool such as LoadRunner to simulate business functionalities, testing will be more scientific. For customers without such tools, manual effort is required and testing is limited to critical business functionalities. Online transactions and batch jobs/processes can be timed in the existing PROD system and then compared against the migrated system.
Performance Testing
Any process whose elapsed time differs between the existing PROD and the migrated system has to be traced. Tracing for online transactions can be done with Forms trace, whereas for batch programs it needs to be turned on at the program level. All trace files can be analyzed with either TKPROF or Trace Analyzer to identify the issues. This test needs to be performed in iterations 3 and 4.
Failover Testing
This test mainly applies to systems with multiple nodes and failover capabilities, e.g. RAC databases, Parallel Concurrent Processing (PCP), etc.
Failover Testing
RAC Database Failover:
SQL*Plus session failover (selects only)
Application functionality with a node down
PCP Failover:
A couple of DB nodes down
CM node down
CM managers failover from primary to secondary
Load Balancer Failover:
Shutdown of one or multiple APPS-tier nodes
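A hedged sketch of forcing a RAC instance failure for this test (database and instance names are illustrative):

```shell
# Hedged sketch: abort one RAC instance, verify survivors, then restore.
srvctl stop instance -d PROD -i PROD1 -o abort
srvctl status database -d PROD        # remaining instances should stay up
# ...run SQL*Plus session failover and application checks here...
srvctl start instance -d PROD -i PROD1
```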
Load/Capacity Testing
This testing requires automated tools to simulate the load; replicating it manually is very hard. It is usually done in batches with different sets of users covering all business functionalities to reproduce a real-time scenario.
Batch sizes of concurrent users: 200, 400, 600, 1200 and 2000
Active application nodes: 1, 2, 4, 6 and 8
Active DB nodes: 1, 2, 4 and 6
The following statistics are collected on both the application and database tiers:
CPU/memory/disk utilization
Load average
Functionality or screen timings
Cross-Platform Migrations