
How to Perform SAP Enterprise

Portal Load Testing


ENTERPRISE PORTAL 6.0

Version 1.0

PUBLIC

ASAP “How to…” Paper

Applicable Releases: EP 6.0


September 2004

HOW TO PERFORM SAP EP LOAD TESTING PUBLIC

Copyright
© Copyright 2004 SAP AG. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or for any purpose without
the express permission of SAP AG. The information contained herein may be changed without prior
notice.

Some software products marketed by SAP AG and its distributors contain proprietary software
components of other software vendors.

Microsoft, Windows, Outlook, and PowerPoint are registered trademarks of Microsoft Corporation.

IBM, DB2, DB2 Universal Database, OS/2, Parallel Sysplex, MVS/ESA, AIX, S/390, AS/400, OS/390,
OS/400, iSeries, pSeries, xSeries, zSeries, z/OS, AFP, Intelligent Miner, WebSphere, Netfinity, Tivoli,
and Informix are trademarks or registered trademarks of IBM Corporation in the United States and/or
other countries.

Oracle is a registered trademark of Oracle Corporation.

UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group.

Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are
trademarks or registered trademarks of Citrix Systems, Inc.

HTML, XML, XHTML and W3C are trademarks or registered trademarks of W3C®, World Wide Web
Consortium, Massachusetts Institute of Technology.

Java is a registered trademark of Sun Microsystems, Inc.

JavaScript is a registered trademark of Sun Microsystems, Inc., used under license for technology
invented and implemented by Netscape.

MaxDB is a trademark of MySQL AB, Sweden.

SAP, R/3, mySAP, mySAP.com, xApps, xApp, SAP NetWeaver, and other SAP products and
services mentioned herein as well as their respective logos are trademarks or registered trademarks
of SAP AG in Germany and in several other countries all over the world. All other product and service
names mentioned are the trademarks of their respective companies. Data contained in this document
serves informational purposes only. National product specifications may vary.

These materials are subject to change without notice. These materials are provided by SAP AG and
its affiliated companies ("SAP Group") for informational purposes only, without representation or
warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the
materials. The only warranties for SAP Group products and services are those that are set forth in the
express warranty statements accompanying such products and services, if any. Nothing herein
should be construed as constituting an additional warranty.

©SAP AG II

Table of Contents

1 OVERVIEW

2 REQUIREMENTS, GOALS, AND SUCCESS CRITERIA
   2.1 Sizing Verification
      2.1.1 Example for Determining Req/h Requirements
      2.1.2 Example for Defining Requirements Based on the Number of Concurrent Users
      2.1.3 Peak Load Requirements
   2.2 Scalability
   2.3 Stability
   2.4 Response Times
   2.5 Robustness and Fail-Over

3 PROJECT MANAGEMENT ASPECTS OF LOAD TESTING
   3.1 Define Project Milestones
   3.2 Break Down Test Steps
   3.3 Detailed Project Milestones
      3.3.1 Planning Phase
      3.3.2 Single-User Tests
      3.3.3 Small Load Tests
      3.3.4 EP Response-Time and System-Load Tests
   3.4 Late-Start Load-Testing Issues
      3.4.1 Case I: Add a Few CPUs
      3.4.2 Case II: Use Caching Proxy Servers on Remote Sites

4 LOAD-TESTING METHODOLOGIES
   4.1 Selecting a Load Testing Tool
   4.2 Load Testing Methodologies in Detail
      4.2.1 Single-User MS-IE Testing
      4.2.2 Load-Testing Script Development
      4.2.3 Single-User, Two-Loop Testing with Zero Think Time
      4.2.4 Single-User Test with Many Loop Executions and Zero Think Time
      4.2.5 Multi-User Test with Many Loop Iterations and Zero Think Time
      4.2.6 Multi-User, Many-Loop Executions Test with Think Time
         4.2.6.1 Robustness Testing
         4.2.6.2 Stability Testing
   4.3 Load Testing Scenario Design
      4.3.1 Login/Logout Test
      4.3.2 Navigation Steps
      4.3.3 URL-iView Tests
      4.3.4 Java iView Tests
      4.3.5 Piecing Together EP Expert Sizing
      4.3.6 Business Scenario Tests
   4.4 Load Testing Different Enterprise Portal Landscapes
   4.5 Performance Test Plan

5 TEST PREPARATION
   5.1 Goals and Success Criteria
   5.2 Business Scenario Definition
   5.3 Test Schedule
   5.4 GoingLive Analysis Session
   5.5 Baseline Testing
   5.6 Landscape Diagram
   5.7 Load Test Environment
   5.8 Load Balancing
   5.9 HTTPS
   5.10 External Tools
   5.11 Backend Systems
   5.12 Test User Accounts
   5.13 Contact Persons
   5.14 Onsite Office Requirements
   5.15 Administrative Accounts
   5.16 Monitoring Setup
   5.17 Preparing the Portal

6 SCRIPT RECORDING
   6.1 Browser Settings
   6.2 Protocol
   6.3 Recording Log
   6.4 Headers
   6.5 Keep-Alive
   6.6 Script Structure
   6.7 Load Testing Knowledge Management

7 TEST EXECUTION
   7.1 Before Execution
   7.2 During Execution
   7.3 After Execution

8 MONITORING, TROUBLESHOOTING AND ANALYZING RESULTS
   8.1 Monitoring
      8.1.1 UNIX Monitoring
      8.1.2 Windows Monitoring
      8.1.3 SAP J2EE Monitoring
      8.1.4 LDAP Monitoring
      8.1.5 Portal Monitoring
      8.1.6 Network Monitoring
   8.2 Troubleshooting
      8.2.1 Debugging iViews
      8.2.2 Troubleshooting Memory Issues
   8.3 Analysis
      8.3.1 System Analysis
      8.3.2 Test Analysis
      8.3.3 J2EE Analysis
         8.3.3.1 J2EE Memory Analysis
         8.3.3.2 J2EE Log Analysis

1 Overview
Load testing should be part of every SAP Enterprise Portal (EP) implementation project, as well as of the ongoing maintenance of an EP landscape. This paper explains how to conduct load testing from a project-management point of view and describes best practices. It gives you the key questions to ask during the planning and preparation phases, and shows you how to answer those questions with specific tests.

This paper describes an efficient method for streamlining load testing of your EP site, and guarantees
repeatability of results through process standardization.

The questions and the methodologies to answer these questions are designed to build up confidence
in an EP landscape by checking the most common pain points so that they can be addressed in a
timely manner. The results give clear facts upon which decisions about production readiness can be
based. The remaining risks in the operational system can be pinpointed based on these results.
Proper testing can be costly:
• Tools for load testing can be expensive.
• Time spent on load tests has to be paid for (consulting).
• An EP system landscape must be available for load testing.
The benefits are:
• Optimized performance of your EP landscape.
• Assurance of the stability and scalability of the landscape.
• The ability to manage end-user expectations according to the performance levels that were measured.
• Verification of the expert hardware sizing, based on the specific EP content of a particular customer landscape.
• A cost breakdown analysis of specific EP content.
Because of the associated costs, it is important to include these testing activities as an integral part of the project development process. There are considerable differences between testing individual system functions and testing overall system performance: by testing individual EP functions in isolation you cannot predict operational system performance. Therefore, it is important to include explicit tasks for testing EP performance under multi-user load in the overall EP implementation project plan. Similar tests should also be part of the ongoing operation and maintenance plans.
The first two sections of this paper discuss planning and management aspects of load testing. Other
sections describe different types of tests, test strategies, test planning and preparation,
recommendations on script design and recording, test execution and analysis, and troubleshooting.
This paper does not refer to any particular load-testing tool, but focuses on generic methodologies.
Guides for using specific load testing tools are provided separately.


2 Requirements, Goals, and Success Criteria


Your system can be broken or overloaded easily through badly designed or poorly executed load
testing. The central question must be:
When is your system performance good enough for production?
In other words, when are the requirements met that were formulated at the start? Are you sure that
these requirements were properly reflected in the test setup and design?
Well-defined measurements and requirements allow you to define an exit point for your load-testing activities. This means that you know when to stop testing. Such requirements can be verified early in an implementation project and can serve as milestones for measuring the progress of the implementation.
When a project plan for an EP implementation or a major EP upgrade is issued for the first time, this is also the time to define the performance requirements. The purpose of load testing is to verify when and to what degree these requirements can be met.
The following list specifies questions that can be answered through load testing:
• Sizing verification: How much hardware is needed?
• Scalability: If more hardware is added, what higher load can be supported?
• Stability: How long will the EP run uninterrupted without failure?
• Response times: Will system response times satisfy end users?
• Robustness: Will the EP system survive a temporary overload without going down?

2.1 Sizing Verification


In order to answer the above questions, called the 3S2R list (three S's: sizing, scalability, stability; two R's: response times, robustness), clear measurements and testing requirements must be introduced at the beginning of each implementation project. Answering these questions must be a central goal of the project implementation.
Sizing is the task of determining how much hardware capacity is needed to satisfy the load
requirements of your software implementation project. Hardware capacity is usually defined by
characteristics of the server landscape configuration:
• Number of computers
• Type and number of CPUs
• Clock rates
• Amount of RAM
• Amount of hard disk or other external storage
• Network interface bandwidth
Verifying the sizing is one of the most important features of load testing.
Another more comprehensive and business-like approach is to specify hardware capacity through
industry standard benchmark results. Such results allow you to compare the hardware specifications
of different vendors of operating systems, databases, and application server platforms, and to select
the most appropriate offering.


Depending on the kind of software application, there are different ways to define the sizing
requirements. The different approaches for hardware sizing directly lead to the different ways to
define requirements for your load tests:
• Based on transaction volume: mostly used for ERP applications where the term
transaction is clearly defined.
• Based on number of concurrent users: also used for ERP applications when you know
how many people are using a software application, but you do not know how many
transactions are carried out.
• Based on number of requests/hour: traditionally used for Web applications. Since the HTTP protocol is stateless by nature, the terms transaction and concurrent user are hard to define. Measurements such as requests/second, however, are easy to take on the Web server or application server.
Since SAP Enterprise Portal is a Web-based application, the requirements for your load tests are
best defined with the requests/hour (req/h) approach. This has the following advantages:
• Req/h requirements are more Web-like and intuitive.
• Req/h requirements can be directly measured on legacy systems based on a Web-server.
This information can be very valuable for defining requirements for a new SAP Enterprise
Portal landscape.
• No think time (see below) and concurrent user requirement definitions are needed.
• Goals and requirements become somewhat independent of the special scenario definitions.
• Req/h load is, to a good approximation, directly proportional to CPU load.

The simple definition, based on requests/hour, assumes a generic average cost factor for all kinds of
requests that can be sent to an enterprise portal. However, it was found that the relative cost factors
for the different request types vary tremendously. For example:

Request type         Relative cost factor in one particular test system
Login                6
Navigation step      1
URL iView step       ~0
A complete list also covering other types of iViews (other cache and isolation levels) will be provided
in a later version of this document.
Consequently, it seems prudent to define some major request classes for which an average cost
factor can be assumed. This boils down to the following:
1. Login Requirements: User logins are usually expensive due to the content size of the welcome page, the underlying authentication processes, backend accesses, and the build-up of a user session in the EP engine. The following questions about logins have to be answered:
a. Number of user logins per day: _________
b. Number of logins during one hour in the morning: _________
c. Number of work hours per day: _________
2. Throughput Requirements: One request equals one click by one end user.
A list of typical request types is shown below. This list depends on your specific configuration. The peak number of requests per hour is required for sizing.


Request Type                    Peak Number of Requests/Hour
Portal navigation (top level)   _________
SQL iView                       _________
URL iView, not cached           _________
URL iView, cached               _________
Java iView                      _________
KM iView                        _________
With these requirements, test cases can be designed to measure CPU usage based on individual
request/hour type load.
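As an illustration of how such per-type measurements can be combined, the following Python sketch (not part of any SAP deliverable) weights a hypothetical request mix by the relative cost factors from the table above; the req/h figures are invented for the example:

```python
# Relative cost factors from one particular test system (illustrative only)
cost_factor = {"login": 6.0, "navigation": 1.0, "url_iview": 0.0}

# Hypothetical peak load requirements in requests/hour per request type
peak_req_per_hour = {"login": 1000, "navigation": 8000, "url_iview": 20000}

# Weighted load expressed in "navigation-step equivalents" per hour:
# a rough way to compare scenarios with different request mixes
weighted_load = sum(cost_factor[t] * peak_req_per_hour[t] for t in cost_factor)
print(weighted_load)  # 6*1000 + 1*8000 + 0*20000 = 14000.0
```

A scenario dominated by cached URL iViews thus contributes far less CPU load than its raw req/h figure suggests, which is exactly why the request classes are sized separately.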
Sometimes you need to convert req/hour to the corresponding number of concurrent users. However,
the number of concurrent users does not directly relate to the CPU usage of a system. Additional
parameters like the think time (TT) and response times (RT) have to be taken into account. The think
time is the time an end-user needs to submit a subsequent request after having received a response.
It usually lies in the range of 10 to 30 seconds. The response time is the time SAP Enterprise Portal
takes to serve an end user request (typically a “click”). Consequently, the response time includes
backend-, network-, and browser rendering times.
Specifying the number of concurrent users (cu) without the think time and response time is not
sufficient for defining your requirements. Once the think time and response times are specified, the
req/hour (see previous paragraph) can be calculated using the formula:

   req/h = 3600 sec/h * cu / (TT + RT)                               Equation 1

where
   cu is the number of concurrent users
   TT is the think time per request (in seconds)
   RT is the response time per request (in seconds)
This formula gives the overall req/h load from all request types combined. You need to analyze
typical customer-specific usage scenarios to get a breakdown by different request loads.
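Equation 1 is straightforward to script; the following minimal Python helper (a sketch, not part of any SAP tool) converts a concurrent-user requirement into req/h:

```python
def requests_per_hour(cu, think_time_s, response_time_s):
    """Equation 1: req/h = 3600 * cu / (TT + RT).

    cu              -- number of concurrent users
    think_time_s    -- average think time per request, in seconds
    response_time_s -- average response time per request, in seconds
    """
    return 3600.0 * cu / (think_time_s + response_time_s)

# 73 concurrent users, 20 s think time, 2 s response time
print(round(requests_per_hour(73, 20, 2)))  # 11945, i.e. about 12,000 req/h
```

Note that the result matches the 12,000 req/h figure derived in the example of section 2.1.1.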
The term user is used in different ways, such as concurrent user in the statement above. For clarity,
we use the following definitions in this paper:
• Named users: Users who have a login account to EP. They might or might not log in on a
daily basis.
• Logged on users: Users who are logged into EP but have “break times” in between the
times they use EP. At many customer sites, users log in only once in the morning and then
stay logged in for sporadic EP use all day long.
• Concurrent users: Users concurrently active and working in the portal with a certain TT,
typically in the range of 10 to 30 seconds.
As will be shown in the next two sections, it is very important to distinguish between logged on users
and concurrent users, because the licensing of load test tools is based on the number of concurrently
active users.

2.1.1 Example for Determining Req/h Requirements

With the following sample list of steps you can determine the req/hour requirements:
• Determine how many users work in a system on a daily basis
• Determine how many scenarios each user executes per day
• Determine the number of “clicks” per scenario


• Determine a realistic average think time


• Determine the number of work hours per day
• Estimate the average response times; use 2 sec if no estimate is known
Now assume the answers are:
• 1000 users
• Each user does 8 scenario executions/day on average
• A scenario has 12 “clicks” (= requests) on average
• Average think time is 20 seconds
• A work day has 8 work hours
• Estimated response time is 2 sec

Scenario Requirement            Formula                                          Result
Total time for a request step   TT + RT = 20 sec + 2 sec                         22 sec
Total scenario execution time   12 steps * 22 sec                                264 sec (4.4 min)
Requests/hour                   1000 users * 8 scenarios/day *                   12,000 req/hour = 3.3 req/sec
                                12 req/scenario / 8 work hours
Number of concurrent users      step time * req/sec = 22 sec * 3.3 req/sec       73 concurrent users
In the above example, a load test would need to sustain a load of 12,000 req/h. An appropriate load
test scenario (containing 12 clicks) would run about 73 concurrent users with 20 seconds think time.
The scenario itself could include:
• A call to a login screen, submit login user/password (2 steps of 12 = 17%)
• 1 navigation step (1 step of 12 = 8%)
• Click on 8 URL iViews (8 steps of 12 = 67%)
• One logout (1 step of 12 = 8%)
Other scenarios can be compiled in a similar way. The individual req/h values for the different request types could be added up and used as input for sizing measurements as explained in section 2.1.
In the above example, 1000 logged-on users were assumed. Further assumptions about scenario
executions and think time lead to a calculation of 73 concurrent users. Although 1000 users will be
logging into SAP Enterprise Portal during the course of one day, their usage pattern shows extensive
times of inactivity, which leads to a comparably low number of concurrent users.
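The worked example above can be reproduced in a few lines of Python (all values are taken from the text; the script itself is only an illustrative sketch):

```python
# Worked example from section 2.1.1
users = 1000             # logged-on users per day
scenarios_per_day = 8    # scenario executions per user per day
clicks_per_scenario = 12 # requests ("clicks") per scenario
think_time = 20.0        # seconds
response_time = 2.0      # seconds (estimate)
work_hours = 8

step_time = think_time + response_time                 # 22 sec per request step
scenario_time = clicks_per_scenario * step_time        # 264 sec = 4.4 min
req_per_hour = users * scenarios_per_day * clicks_per_scenario / work_hours
req_per_sec = req_per_hour / 3600.0
concurrent_users = step_time * req_per_sec             # Equation 1 solved for cu

print(req_per_hour)             # 12000.0
print(round(concurrent_users))  # 73
```

Changing the think time in this sketch immediately shows the trade-off discussed in section 2.1.2: the same req/h target can be reached with fewer virtual users and shorter think times.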

2.1.2 Example for Defining Requirements Based on the Number of Concurrent Users

The following example is similar to the sizing approach based on req/h, but is based on creating an
estimate of the number of concurrent users. You start with the following questions and statements:
• How many users work in a system on a daily basis?
• How many scenarios does each user execute per day?
• How many clicks does a scenario contain?


• How many work hours are there in a day?


These questions could also be answered as follows:
• 1000 users
• Each user carries out 8 scenario executions/day on average
• A scenario has 12 “clicks” (= requests) on average
• A work day has 8 work hours

The think time can be calculated as:

   TT = 8 h * 3600 sec/h / (8 scenarios/day * 12 req/scenario) = 300 seconds

This would result in a test setup with 1000 concurrent users working with 300 seconds think time
between actions. The long think time, however, is just a statistical average. Although this concurrent
user approach might appear simpler (with fewer questions to ask), it has some shortcomings:
• Unrealistic assumptions about user behavior. 300 seconds equals 5 minutes think time.
Realistic end-user think times are in the range of 10 (fast) to 30 seconds (slow).
• Long think times might exceed timeout thresholds, causing additional overhead that would
not really occur. Example: If the recommended https keep-alive time-out is set to 300
seconds, connections will be closed if the think time exceeds 300 seconds.
• Long think times result in long tests: In the above example, one scenario iteration would take
one hour (12 req/scenario * 300 s TT = 3600 sec)! This is rather long compared to the 4.4
minutes execution time calculated for the req/sec based approach in section 2.1.1.
• Because the license models for most load test tools are user-based, the req/h approach is in
the range of 10 times cheaper than a statistical (concurrent user)/(think time) approach.

However, by estimating the response time as outlined in section 2.1.1, you can also calculate the
required req/h using Equation 1.
SAP recommends using the req/sec method for defining load requirements for EP. Both approaches
(sections 2.1.1 and 2.1.2) can lead to a load definition based on req/sec, which is a measurement
that can be observed by most load test tools. It is possible to reach the same ratio of req/sec with a
lower number of virtual load test users by lowering the think times. This approach, however, imposes
other limitations on your tests. You must be aware of these limitations if your think times are
extremely short or zero.
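The statistical think time of the concurrent-user approach follows directly from the figures above, as this short Python sketch shows (values from the text):

```python
# Section 2.1.2: deriving the statistical think time for the
# concurrent-user approach
work_seconds = 8 * 3600                # 8-hour work day, in seconds
requests_per_user_per_day = 8 * 12     # 8 scenarios * 12 clicks

think_time = work_seconds / requests_per_user_per_day
print(think_time)  # 300.0 seconds -- far above realistic 10-30 s think times
```

The 300-second result is what makes this approach expensive: 1000 mostly idle virtual users must be licensed to generate the same req/h that roughly 73 busy users produce in the approach of section 2.1.1.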

2.1.3 Peak Load Requirements

It is common that the load on a productive enterprise portal system is not constant over time: a customer's business activities cause statistical fluctuations as well as more systematic changes in the load. Typical examples often seen with an enterprise portal system are the ESS and MSS applications (Employee/Manager Self-Service), in particular time sheet recording, which most employees using ESS tend to do on Friday afternoon. The peak load from such a usage pattern can be significantly higher than the average load.


Often a system sized and tested for such peak load is defined as a requirement. Since performance
should not degrade on Friday afternoon during peak load time, it is most important to determine what
the peak load will be. If estimates are too low, one risks insufficient hardware sizing, which leads to
performance degradation. If peak load sizing is too high, the total cost of ownership (TCO) of building
and operating an EP landscape is high and the Return on Investment (ROI) for the customer is
reduced.
An example: When considering peak loads, you might ask what is the highest load measured in
req/sec that can be caused by 1000 users. Theoretically the answer is 1000 req/sec if all 1000 users
happen to click at exactly the same time. However, since end-users work asynchronously, chances
that this situation will occur are very small.
The extreme example above demonstrates the need to look at peak loads over different time
intervals: observe the maximum average load over intervals of increasing length. Sample
measurements for 2000 logged-in users performing time sheet entries could be:
• maximum average over one week: 2 req/sec
• maximum average over one hour: 10 req/sec
• maximum average over one minute: 30 req/sec
• maximum average over a second: 100 req/sec
This statistical phenomenon shows that the shorter the time interval, the higher the observed peak
load. In this case, the long-term average load during office hours is only 2 req/sec, yet one-second
peaks of about 50 times that long-term average were observed.
One can argue that requests that cannot be served by the enterprise portal immediately due to a lack
of resources are queued and processed when the peak time is over. A peak load of one second
would cause performance degradation for a total of only a few seconds. Based on this argument, it is
up to the customer to find a reasonable compromise between the length of the time interval for which
some performance degradation can be tolerated and the maximum peak load for which the hardware
should be sized. In the above case, the decision was to size the hardware for 30 req/sec in order to
cover load peaks lasting up to one minute, and to accept occasional short intervals of a few minutes
during which performance might be degraded. Compared to sizing for the one-second peak load, this
avoided roughly a threefold increase in hardware and hardware operation costs. This requirement
would also be the goal for your peak load test. If you have not yet discussed these aspects of peak
load with the EP users, you should do so to evaluate the requirements for your test strategy.
Another related subject is the problem of the first day. A particular load peak may occur on the very
first day of production or after a major release upgrade or content update. As will be explained in later
sections, the client (browser) side caching of static enterprise portal Web content is of the essence
for good response time performance. The problem of the first day occurs because client-side caches
start out empty and have to be filled during the first requests made by end users. These initial
downloads might suffer from bad performance. The end user will see improved performance with
subsequent access to the same pages. Since downloading initial cache content does not cause a
large increase in CPU usage, but only in response times, there is not much that can be done about
this transient phenomenon on the sizing side. One of the best strategies is a phased GoingLive
process that involves rolling out the new EP system to different end user groups on different days. If
the ramp-up of users is done over a period of weeks or months, a customer would have the
advantage of verifying sizing estimates by monitoring the resource consumption caused by a real
production load. In the worst case of an undersized system, a staged approach might leave enough
time for corrective measures, such as adding more hardware to an environment, or continuing work
on outstanding optimization and tuning issues.


2.2 Scalability
Scalability is achieved in the EP system when the addition of more and/or more powerful hardware
results in higher throughput measured in req/h.

There are two ways to increase hardware capacity:


• Add more CPUs to an existing server
• Add more servers

There are two types of scalability:


• SMP (Symmetric Multiprocessor) Scalability: On a machine with many CPUs, it is possible
to achieve 100% usage of all CPUs if the load is high enough.
• Cluster Scalability: It is possible to achieve 100% CPU usage on all machines in a cluster if
the load is high enough.
• In both cases: When hardware is added, the maximum sustainable load should grow linearly
with each increase in hardware capacity.
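A simple way to quantify both kinds of scalability is to compare measured throughput against perfectly linear growth. The following sketch assumes hypothetical req/h figures measured at 100% CPU usage:

```python
def scaling_efficiency(throughputs_by_nodes):
    """Efficiency of scaling relative to perfectly linear growth.
    throughputs_by_nodes: {node_or_cpu_count: measured req/h}."""
    base_n = min(throughputs_by_nodes)
    base = throughputs_by_nodes[base_n] / base_n   # throughput per unit
    return {n: tp / (n * base)
            for n, tp in sorted(throughputs_by_nodes.items())}

# Hypothetical load-test results: req/h sustained at full CPU usage.
measured = {1: 90_000, 2: 171_000, 4: 306_000}
for n, eff in scaling_efficiency(measured).items():
    print(f"{n} node(s): {eff:.0%} of linear scaling")
```

An efficiency that drops sharply as nodes (or CPUs) are added points to contention, for example within a JVM process, and indicates that the maximum sustainable load will not grow linearly.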

SMP scalability might be limited by contention within a JVM process. If such a limitation is detected,
you should install multiple active EP cluster nodes on one hardware server. A ratio of 2 CPUs per
active EP cluster server node is usually appropriate, depending on the purpose and content of your
EP system. SMP scalability can be achieved in this way.
You need to set up an EP cluster across multiple hardware servers for high load demands.
Recommendations on how to set up an EP cluster environment depend on the release and patch
versions. Check the latest documentation for your release for current best practices.
Switching from a one-server node configuration to a cluster setup will cause additional overhead.
Therefore sizing verification measurements must be repeated when the new hardware is added to a
landscape. The introduction of load balancers, routers, https etc. might also add overhead and should
also trigger new sizing verification tests.
It is best to start load testing on a small scale and run more extensive tests over time. The entire
productive EP landscape, including load balancers and all other components in the infrastructure,
should only be tested at the end. We strongly recommend that you follow this “bottom-up” approach
for maximum efficiency during your load testing.

2.3 Stability
Stability is the ability of an EP system to run uninterrupted for a long time without performance
degradation. Due to the complexity of EP landscapes, which can connect to a series of backend
systems and which can contain customer-developed, plug-in Java coding, you are recommended to
perform stability tests.
A number of different problems can make your EP system unstable. The most common cause of
instability is a memory leak in the Java coding. Java iViews developed by customers must be tested
for memory leaks during quality assurance.
The following tools can help you identify the root causes of stability problems:
• JARM (Java Application Response Time Measurement)
• SAT (Single Activity Tracing)


• Introscope from Wily


• Deep Diagnostic from Mercury
Also see the OSS Notes for known stability issues.
Stability requirements could include the following:
• There should be a certain period of uninterrupted uptime during which no EP restart is
needed.
• Response times should not degrade more than a predefined percentage during uptime.
System resource consumption should not increase more than a predefined percentage.
• The number of incorrect responses should not exceed a certain maximum percentage.
Stability tests are usually run at a load level below maximum load (typically 70% or less).
Example: A given EP system should be able to sustain 65% of the maximum load for 12 hours with a
response error rate of 1% or less. Response time degradation and the increase in hardware resource
consumption should be less than 5% across the whole time period.

2.4 Response Times


EP response times are very important for end user acceptance. Defining response time requirements
and verifying sizing are the two most important aspects of load testing.
At this point, it is important to have a precise understanding of the terminology used and the response
times measured. Terms like request, roundtrip, click, etc. often lack clear definitions. The following
definitions should be used in an EP context:
• A user request is usually triggered by a mouse click or by pressing Enter. This results in one
or more application request roundtrips.
• Application requests result in one or more URL requests. Web server monitors often provide
this measurement and refer to them as pages.
• A URL request results in a number of HTTP roundtrips for dynamic and static Web content
objects.
• An HTTP roundtrip results in a number of TCP/IP network roundtrips to carry out one or more
of the following operations:
o Open a TCP/IP connection
o SSL certificate handshake
o Retrieve the first data content package
o Retrieve additional data content packages if an object is larger than the network
package size.
Note that all these request types are often simply referred to as requests. Do not confuse the
different types; always make clear which type of request you mean. You can often find ways
to improve the response time by measuring the number of calls on different call hierarchy levels. You
should define end-user response time requirements sensibly. For instance, it must be clear that the
WAN (Wide Area Network) responses could require a long time due to unavoidable physical
limitations of a network and system landscape and unique business needs:
• Response time depends on the amount of content sent to the browser. More content results
in longer response times.
• Response times increase drastically when the system is overloaded. Therefore, good
response times can only be achieved for properly sized systems.


• Response times increase with increasing geographical distance between the end user and
the location of the server. This is a physical limitation caused by the finite speed of light.
• Response times depend on wait times caused by other applications (typically connected as
backends) as well as the processing time within the EP.

Following the logical chain from the browser to the backend systems, the following individual
response times contribute to the total response times perceived by the end user:
• Browser page rendering time: Depends on the amount of content and the hardware the
browser is running on.
Note: Usually the client rendering time is not included in the response times reported by load
test tools.
• Network times:
o Latency times due to limited signal speed and geographical distance
o Bandwidth constraint time
• Transfer and processing times in components such as load balancers, routers, and Web
servers
• EP processing times
• Application backend response times

Any of the above can contribute to long response times. It is important to monitor response times so
that you can measure a response time breakdown and identify the major contributors to the
excessive response time.
Browsers can cache static Web content. With good caching, response times are reduced
significantly. In extreme cases, moving from inactive to active client-side/browser caching can
improve response times by several hundred percent. Since caching implies the need to make an
initial cache download, there will be periods when response times are degraded:
• The first time access to a Web page can have a much worse response time than subsequent
accesses.
• A large number of initial accesses might give bad response times and lead to unexpected
peak load situations on the very first day during production. A staged production start that
phases in user groups over time is preferable.
• These peak phases will re-occur with each new EP release or content update.
Some numbers from typical customer EP implementations might be useful to understand the impact
of browser caching:
• Content sizes:
o several 100 KB static content per page, cached on the browser side
o approx. 10 KB dynamic content per page downloaded with each request
→ The download volume for initial versus subsequent requests is reduced by a factor of 10
• Number of HTTP requests for one end-user request:
o Approx. 20 to 30 HTTP requests for static content
o 2 to 5 HTTP requests for dynamic content


→ The number of roundtrips will be reduced considerably when the browser cache is used and
already filled (very important in WAN environments with high latency!).
Considerable differences can occur when comparing LAN versus WAN response times. The
response times on the LAN are dominated by response times of the EP and backend servers. WAN
response times include network times. You can estimate the WAN network times if the network ping
time and bandwidth are known and you make a few assumptions. Example:
• Server response (LAN) time = 1 second
• Data volume = 20 KB
• 5 HTTP roundtrips and 1 TCP/IP roundtrip per HTTP round trip
• Ping time is 0.08 seconds (US East Coast to Central Europe)
• Bandwidth = 100 Mbps
→ Network latency: 5 * 0.08 sec = 0.4 sec
→ Network bandwidth time: 20 KB * 8 bits/byte / 100 Mbps ≈ 0.0017 sec
→ Total WAN response time ≈ 1.4017 seconds

The network bandwidth has almost no impact on response times, whereas latency times can
significantly increase response times. An exception is slow telephone modem connections where
bandwidth times can also be large.
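The estimate above can be written as a small helper. This is only the rough model from the text (server time plus roundtrip latency plus serialization time); it ignores TCP slow start, SSL handshakes, and packet loss:

```python
def wan_response_time(lan_time_s, data_kb, http_roundtrips,
                      ping_s, bandwidth_mbps, tcp_roundtrips_per_http=1):
    """Rough WAN response-time estimate:
    server (LAN) time + latency per roundtrip + bandwidth transfer time."""
    latency = http_roundtrips * tcp_roundtrips_per_http * ping_s
    transfer = data_kb * 8 / 1000 / bandwidth_mbps   # KB -> kbit -> seconds
    return lan_time_s + latency + transfer

# Example from the text: 1 s server time, 20 KB, 5 HTTP roundtrips,
# 0.08 s ping (US East Coast to Central Europe), 100 Mbps bandwidth.
t = wan_response_time(1.0, 20, 5, 0.08, 100)
print(f"{t:.4f} s")   # the 0.4 s latency term dominates the bandwidth term
```

Varying the ping time and roundtrip count in this model quickly shows why reducing roundtrips, rather than buying bandwidth, is the effective WAN optimization.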

An efficient reduction of roundtrips requires optimization:


• Proper Keep-alive configuration eliminates almost all TCP/IP open connection roundtrips.
• Expiration date tagging of mime files eliminates the "check for newer version" HTTP
roundtrips for browser-cached objects.
• Compression of data reduces the number of subsequent TCP/IP data package roundtrips.
Note that HTTPS (vs. plain HTTP) increases the WAN response time due to additional SSL
roundtrips for the SSL handshake process.
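Whether these optimizations are active can be spot-checked from the HTTP response headers. The following sketch uses only the Python standard library; the portal URL and the exact header values your Web server emits are assumptions:

```python
import urllib.request

def evaluate_headers(h):
    """Judge a response-header dict against the optimizations listed above."""
    return {
        "keep_alive": h.get("Connection", "").lower() != "close",
        "expires_set": "Expires" in h
                       or "max-age" in h.get("Cache-Control", ""),
        "compressed": h.get("Content-Encoding") == "gzip",
    }

def check_url(url):
    """Fetch a URL (placeholder) and evaluate its headers."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        return evaluate_headers(dict(resp.headers))

# Well-tuned static content should evaluate to all True:
print(evaluate_headers({"Content-Encoding": "gzip",
                        "Cache-Control": "public, max-age=86400"}))
```

Usage against a live system would be, for example, `check_url("http://portal.example.com/irj/portal")` (hypothetical host), run once for a static mime file and once for the dynamic page.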
When defining response time requirements, it is hard to find one size that fits all cases. Take the
following into account:
• How much content fits on one EP page, including the number of iViews, data volume, and
number of roundtrips needed to display the page.
• Will the EP be used in the LAN only or is WAN usage planned?
o If WAN access is planned: measure the geographical distance through ping times.
• Will the portal use plain HTTP or also HTTPS?
• How good are the backend response times? Will data compression be used?
The points raised in this section show that you must collect certain information to determine response
times. The information about response times is summarized in the following tables:


EP server location:
Number of users with LAN access to EP servers:

Remote location | Network bandwidth to end-user | Network ping time to remote location | Number of end-users in remote location
----------------+-------------------------------+--------------------------------------+---------------------------------------
                |                               |                                      |

Request type                  | Upper response time for LAN user | Upper response time for slowest remote WAN access
------------------------------+----------------------------------+--------------------------------------------------
Login/Welcome page            |                                  |
Portal navigation (top-level) |                                  |
SQL – iView                   |                                  |
URL – iView, not cached       |                                  |
URL – iView, cached           |                                  |
Java – iView                  |                                  |
KM – iView                    |                                  |

2.5 Robustness and Fail-Over


Another common concern at customer sites is what happens in case of a temporary overload on the
EP system.
• Will it be necessary to restart EP?
• Will the overload cause unplanned EP downtimes?
• Does the system recover after an overload situation?
• What happens if a J2EE node or a whole server crashes?
• Will the cluster continue to function?
You should define robustness requirements so that these concerns can be quantified. The
requirements must then be verified with overload or fail-over stress tests. The following criteria are used:
• Define a load reference point that causes 100% CPU usage.
• Define a no-crash overload factor that serves as the test target. For example: a system load
of 300% can be achieved without server crash.
• Define a maximum overload time period during which the overload can last, for example: 5
minutes.
Be aware that response errors occur more frequently when the system is overloaded.
A system can be called robust if, after the overload phase ends, the system functions normally again
without the need to restart it.
In addition to these robustness tests, the tester can consider running fail-over tests. These tests
include shutting down individual instances or servers to observe the potential of the cluster for
recovery. While a server crash would obviously lead to errors in currently running sessions and to
performance degradation, the basic functionality of the portal cluster should be recovered during the
test.


3 Project Management Aspects of Load Testing


The following problems typically occur when you look at performance-related risk factors of EP
implementation projects from a practical viewpoint:
1. Load testing is performed with gaps in the requirement definitions. If these gaps only surface
shortly before the start of production, the GoingLive decision becomes very difficult.
2. No owner is defined and load-testing engineers change frequently. This leads to incoherent
results.
3. Load-testing is started too late on the project timeline, not leaving enough time to fix
problems before the production start deadline.
4. Hardware sizing is insufficient.
5. Stability is insufficient.
6. Software error issues have not been solved.
7. Artificial errors are caused by insufficient load-test scripting and/or tool features.

The following sections help to prevent such common issues.

3.1 Define Project Milestones

The first three issues mentioned above are related to project management only and can be
addressed by making load-testing an explicit task, tracked in an overall EP implementation project
plan. Multiple load-testing milestones must be defined at the beginning of a project, and these
milestones must span the entire project.
• Define the milestone requirements at the very beginning of an EP implementation project.
The definition of requirements is closely related to system landscape planning. The focus
should be on sizing. Requirements can be refined and changed during an implementation
project.
• Specify a small-scale, single-user load-test milestone. After successfully completing a
small-scale installation, manual single-user tests can verify the configuration for good
performance in parallel with typical quality assurance testing. At this point, you can also
consider running EP Baseline Load Tests according to the project document that defines
"How To Perform Enterprise Portal Baseline Testing.” This document offers a series of simple
tests with pre-defined scripts and well-defined content for your initial tests (see section 5.5).
• Specify a small-scale, multi-user load-test milestone. Once most EP content has been
implemented, small-scale multi-user load tests can test the 3S2R requirements you defined
(see section 2) early in the project.
• Specify a large-scale load-test milestone. Once a production landscape configuration and
content has been fully implemented, final large-scale load tests can be performed.
Special attention should be paid to the load-test design. Consider the following suboptimal example:
• A load test with 1000 concurrent users should be performed with 300 seconds of think time.
• The test scenario consists of 20 steps (clicks).
→ The total runtime for one scenario execution is 20 * 300 seconds = 1 hour and 40 minutes.
→ A stress test usually spans multiple subsequent scenario executions; for example, 5
executions of the 1 hour 40 minute scenario require a total of 8 hours and 20 minutes!


Only one test of this kind could be performed per workday. For optimization, dozens of test repetitions
would be needed. This leads to many days of testing and a high overall cost for load testing.
This example shows that, in order to increase efficiency, tests with short running times must be
performed first. Defining such short tests is easier if meeting the final requirements is not targeted
right from the beginning. Problems found in small tests are also present in bigger test scenarios and
need to be fixed before proceeding with further load tests.

3.2 Break Down Test Steps


A breakdown of all the steps required to satisfy the load-testing requirements should be defined as
milestones that have to be reached at certain stages of an EP implementation project. Focus must
first be put on sizing. The primary system resource to consider is CPU capacity. If the load
requirement definition is based on requests/second (req/sec), it is possible to reduce the total test
time by reducing the number of concurrent users and think times, while keeping the req/sec constant.
An example is a load test with 100 users and 30 seconds think time. This would cause about the
same req/sec load as a 1000 user/ 300 second think time test. However, it would be around 10 times
shorter.
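The equivalence of the two configurations follows from a closed user model in which each simulated user issues one request per (think time + response time) cycle. The 1-second response time used below is an assumption:

```python
def steady_state_rate(users, think_time_s, response_time_s):
    """Approximate steady-state load in req/sec for a closed user model:
    each user completes one request per (think time + response time)."""
    return users / (think_time_s + response_time_s)

# The two configurations from the text, assuming ~1 s response time:
print(steady_state_rate(1000, 300, 1.0))  # 1000 users, 300 s think time, ~3.3 req/sec
print(steady_state_rate(100, 30, 1.0))    # 100 users, 30 s think time, ~3.2 req/sec
```

Both configurations produce nearly the same req/sec, but the second needs a tenth of the virtual users and completes each scenario iteration roughly ten times faster.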
Pushing this idea further leads to “zero think time” load. These tests can be run quickly at the
beginning of an implementation project.
Note that a typical production think time might be in the range of 10 to 30 seconds. Deviating from
such typical production think times can lead to different performance results due to the changing
relationship between the think time and other time constants, such as J2EE session time outs and the
client-to-server connection keep-alive configuration. However, differences in req/sec throughput
between measurements with and without think time might be only around 30%. Therefore, zero think
time measurements can quickly provide some initial results. From these results you can estimate the
need and urgency of improving performance.
Another way to create smaller stress tests, which can be done early on in an implementation project,
is to work on a scaled-down system landscape. If the planned production system is a multi-node
cluster setup, testing one cluster node can also help to detect problems that affect sizing, scalability
and stability. Following this bottom-up approach is also more logical, and permits more focused and
efficient testing. It only makes sense to test a full cluster if you can make sure that the individual
nodes are running properly and are configured identically. Some typical areas for improvement are:
• Test expensive iViews or J2EE applications, which endanger previous sizing estimates.
• Track down long backend response times and contention issues, which endanger SMP
scalability.
• Locate memory leaks and other problems which might endanger stability.
• Again, remember that the prerequisite for cluster robustness is the robustness of individual
server nodes.

Response times only increase minimally with increasing load as long as the maximum capacity of a
system is not exceeded. Thus a small single-user test can identify response time ranges and help to
detect and solve problems. Additionally, LAN response time measurements can reveal server
problems such as:
• Too much and too heavy EP content
• Bad design and page layout, such as too many iViews on the initial page
• Long backend application response times
• Configuration issues
All these problems would also affect WAN response times.


3.3 Detailed Project Milestones

3.3.1 Planning Phase


Performance requirements should be defined during the planning phase. The items shown below in
bold are mandatory.
• Maximum number of user logins per hour
• Maximum number of users logged in concurrently (not necessarily active users)
• Data volume of welcome page after login
• For all user types combined:
o Number of navigation steps per hour
o Number of clicks on iViews per hour
• LAN response times
• WAN response times depending on ping time to remote locations
• Maximum time between restarts of the EP environment
• Maximum allowed request error rate in percent of total number of requests

3.3.2 Single-User Tests

After completing development and quality assurance steps and setting up the basic production
system, you must perform single-user performance tests using some actual portal content to verify
the following:
• Best-case response times
• Fine-tuning of compression, expiration date on mime files, and keep-alive time settings
• Stability for a single user running for many iterations
• Baseline load capacity (see section 5.5) if some content is not yet available

3.3.3 Small Load Tests

After completing custom iView development and deploying the iViews to a QA system, you must
perform small load tests on one J2EE server node:
• Best-case sizing verification tests
• Stability tests
• SMP scalability tests for up to two CPUs for individual iViews or applications

3.3.4 EP Response-Time and System-Load Tests

Test the PRD landscape on multiple hardware servers with multiple load balanced J2EE nodes and,
if appropriate, HTTPS enabled:
• Single user test to verify:
o LAN response times
o WAN response times
o Tuning (compression, mime file expiration)


• Load test to verify:


o Load balancing
o Sizing
o Stability
o Scalability
o Robustness or fail-over
o LAN/WAN response times

3.4 Late-Start Load-Testing Issues


The previous section emphasized the importance of including EP performance milestones early in an
EP implementation project plan. In practice, however, project conditions are rarely ideal. In the worst
case, performance measurements are only started shortly before the production start. In such cases:
• Stress tests performed for the very first time will probably reveal severe problems that will
take unscheduled time to fix.
• Performance requirements might need to be redefined and renegotiated at a late phase of
the implementation project.
• A balance must be struck between accepting remaining business risks and spending more
effort on performance testing/improvement cycles. Your system does not need to be perfect;
it just needs to be good enough.
The following section outlines case studies that show how critical projects can be improved by
performing load-testing outside the regular project plans.

3.4.1 Case I: Add CPUs

Customer requirements:
• 100 req/sec
• 4000 user logins within one hour
Status:
• Various complex LoadRunner scripts have been developed
• There was no clear understanding of measurement metrics
• There were concerns about hardware sizing
Action taken:
• Simplification: Use only two short scripts: one for login testing, one for navigation step testing.
• Quicksizer indicates 10 times more hardware than needed. Do expert sizing based on actual
measurement results from load-testing.
o Expert sizing revealed that only 3 times more hardware was needed.
• Review how customer derived requirements. In this case customer wanted peak load sizing.
Convince customer to move from one-second peak load survival to one-minute peak load
survival.
o New requirement becomes only 30 req/sec
• Customer adds just a few CPUs to existing hardware sizing and the new requirement is now
in sync with comfortable safety margins.


3.4.2 Case II: Use Caching Proxy Servers on Remote Sites

Customer requirements:
• EP was installed in the USA. WAN response times of users in Australia are more than 20
seconds and need to be improved.
Status:
• Customer compares EP with a simple Java Web server site that has WAN response times of
less than 5 seconds.
Action taken:
• Point out differences in content. EP delivers 5 times more content than the customer’s
Java/Web server application.
• Review compression settings and mime file expiration settings.
• Install caching proxy servers on remote sites.
• Consider putting lightweight content into EP, in particular onto the home page/initial page.


4 Load-Testing Methods
This section gives a tool-independent overview of best practice load-testing methods that satisfy the
EP requirements as defined in the previous sections. These methods should be efficient and at the
same time deliver results that are as precise and as targeted as possible in order to ensure readiness
for production. The breakdown described in the previous sections consists of performance
requirements that must be met for productive use of an EP with regard to hardware sizing, scalability,
stability, robustness and response times (3S2R, as described in section 2).

4.1 Selecting a Load-Testing Tool


Before executing your load-test method, you must select a load-testing tool. Success depends on
having the proper testing tool. This paper does not explain how to use a particular tool; instead, it
gives an overview of EP load-testing. Please refer to other closely-related guides that explain how to
use particular load-testing tools for EP load-testing.
The following are desirable tool characteristics:
• Correctness of load-testing results
• Cost effectiveness of load-testing
• Error analysis capabilities
Unfortunately it is not easy to optimize all three aspects simultaneously, and different tools might
have different strengths and weaknesses. The tool preferred by SAP is the most recent release of
LoadRunner 7.8 from Mercury (formerly Mercury Interactive). However, it is up to customers to
decide which tool they prefer.
The following technical features are required to simulate accurately a real-world, productive user
load:
• Correct browser cache simulation
• Cookie handling
• NTLM (if applicable)
• Support of HTTP 1.1
• Support of HTTPS
• Support of GZip compression
• Support of suppressing “check for newer version” requests for mime files with expiration date
marking.
• Connection keep-alive simulation
• Multiple parallel browser-server connection usage (the way a browser works)
Many performance values can only be measured correctly if all the above features are available.
Usually the most difficult part to measure correctly is the response time. Note that load-testing tools
usually do not include browser-rendering time for the response data. This means that the page build-
up time seen by productive end users is typically longer than the response times of simulated users
of load-testing tools.
For usability and work efficiency, the following additional features are very helpful:


• Synchronous online monitoring of stress-test metrics together with OS, database, and other
application-internal monitors. An important example is monitoring the CPU usage of the
application servers.
• Automatic variation of user load over time.
• Monitoring of a wide variety of HTTP traffic-related metrics such as HTTP return code
statistics and TCP/IP level response time breakdown.
• Graphical display of result data with drilldown and multiple filtering functions.
• Flexible analysis and automatic report generation producing documents in a common
file format.
• Automated and flexible script design features, including parametrization, text/content checks,
etc.
Tool license costs should be balanced against the costs of work time. Depending on how much load
testing is required, a lack of features or usability can cause high costs for work time. The time it takes
to write scripts is a critical factor, for example. If this time period is short, costs for work time are
lower. Licenses for simpler tools might be less expensive, but their use might cost more in terms of
work time.


4.2 Load Testing Methods in Detail


The following sections describe recommended load test methods that can be applied to various
business scenarios.

4.2.1 Single-User MS-IE Testing

This first test is purely manual and does not need a load test tool. The commonly used Microsoft
Internet Explorer browser provides a cache status display. This is a good entry point for
understanding EP performance characteristics. Performing this test requires only minutes and allows
some quick checks of important performance configuration settings.
To prepare for the test, first empty the browser cache as follows:

1. Open the Internet Explorer browser.


2. Go to Tools → Internet Options.
3. Delete files and cookies.
4. Set the home page to blank.

5. Close and re-open the browser.


6. Call the EP login page, log in manually, and then log out.

7. Go to Tools → Internet options.

8. Click on Temporary Internet files → Settings → View files.

Check the number of objects cached in the browser listing and the expiration column. If the
expiration dates are not a few days into the future, an opportunity for a first optimization is
found. Setting an expiration date on cached content saves the “check for newer version”
requests that the browser would otherwise send to the backend.
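As a sanity check outside the browser, the Expires headers of cached objects can also be inspected programmatically. The following Python sketch tests whether an Expires header lies far enough in the future to avoid "check for newer version" requests; the header values and reference date are purely illustrative:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def expiration_ok(expires_header, now=None, min_days=1):
    """Return True if an HTTP Expires header lies at least `min_days`
    in the future, i.e. the object is cacheable long enough to avoid
    'check for newer version' (304) round trips."""
    if not expires_header:
        return False
    now = now or datetime.now(timezone.utc)
    try:
        expires = parsedate_to_datetime(expires_header)
    except (TypeError, ValueError):
        return False
    return expires - now >= timedelta(days=min_days)

# Illustrative headers checked against a fixed reference time
ref = datetime(2004, 9, 1, tzinfo=timezone.utc)
print(expiration_ok("Wed, 08 Sep 2004 00:00:00 GMT", now=ref))  # one week ahead -> True
print(expiration_ok("Wed, 01 Sep 2004 01:00:00 GMT", now=ref))  # one hour ahead -> False
```

The same check can be applied to each object listed in the browser's Temporary Internet files view.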

4.2.2 Load-Testing Script Development

At the beginning of performance testing, scripts should be kept as simple and as small as possible to
save script development and testing time. You start with a simple login/logout script, which checks
the following important points:
• Performance of the EP Welcome page.
This page is usually the most expensive page with the most content. In most projects, the
login times are also among the most critical.
• Basic tuning for good response times, such as enabling mime file expiration date settings and
compression.
• Proper functioning of the logout process, including closing the EP/J2EE sessions.

A login/logout script consists of three simple steps:

1. Call the login page of EP.


2. Submit the login user name and password, which returns the EP welcome page.
3. Logout.

These three steps should be defined within a loop in a load-testing script. It is important to program a
cleanup of any session cookies into the script before calling a login page. Such session cookies could
have been left over from a previous execution of this script in the load-testing tool.
Furthermore, it is important to measure the response times of every request. In load-testing tools this
is usually done by bracketing the script code belonging to one particular test with markers that take
timestamps, from which the response times can be calculated.
Another important element in scripts is sleep time between requests, which models user think
time. Since think time settings are arguable, SAP recommends that you use the same think time
between all steps, and apply a multiplication factor at script execution time if the think times should be
varied.
A login/logout script would be structured as follows:
• Begin loop
  o Remove cookies from previous loop
  o Take timestamp
    - Execute all HTTP GETs and POSTs for getting the login screen
  o Take timestamp
  o Sleep for think time
  o Take timestamp
    - Execute all HTTP GETs and POSTs for submitting the login and getting the welcome page
  o Take timestamp
  o Sleep for think time
  o Take timestamp
    - Execute all HTTP GETs and POSTs for the logout
  o Take timestamp
  o Sleep for think time
• End loop
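The loop above could be sketched in Python as follows. The URLs are hypothetical, and the `fetch` callable stands in for the load-testing tool's HTTP engine (which would issue all GETs and POSTs for a step and maintain cookies):

```python
import time

def run_login_logout_loop(fetch, login_url, submit_url, logout_url,
                          iterations=1, think_time=0.0, tt_factor=1.0):
    """Sketch of the login/logout script schema (hypothetical URLs).
    `fetch(url, cookies)` stands in for the tool's HTTP engine and returns
    updated cookies. Each step is bracketed with timestamps, and the same
    think time is used between all steps, scaled by tt_factor at run time."""
    timings = []
    for _ in range(iterations):
        cookies = {}          # remove session cookies left over from a previous loop
        loop_times = {}
        for step, url in (("login_page", login_url),
                          ("submit_login", submit_url),
                          ("logout", logout_url)):
            t0 = time.perf_counter()
            cookies = fetch(url, cookies)        # all GET/POST for this step
            loop_times[step] = time.perf_counter() - t0
            time.sleep(think_time * tt_factor)   # think time between steps
        timings.append(loop_times)
    return timings

# Usage with a stub transport (no real portal needed):
stub = lambda url, cookies: dict(cookies, visited=url)
result = run_login_logout_loop(stub, "/irj/login", "/irj/welcome", "/irj/logout",
                               iterations=2)
print(len(result), sorted(result[0]))
```

A real script would also vary the login user name per iteration, as described below.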

More complex scenarios should follow the same schema (see also section 6.6).
If a script is to be executed many times in parallel, you should make sure that the business data
used for different loop executions is varied. This prevents artificial hotspots and bottlenecks that
would not occur in a real productive system. Varied business data is usually computed at runtime by
load-testing tools. An example of business data variation in the login/logout script is the login user
name: since it is not realistic that the same user (name) logs in thousands of times, it is important to
vary the login user name.
An important variation of the login/logout scenario is to log in within a loop without logging out.
Such a test can determine how many users can be logged into the system in parallel. The number
of logged-in users equals the number of iterations multiplied by the number of execution threads or
“virtual users” of the login load test.

Note the subtle difference between these two login test variants:
• A login/logout scenario tests how many users can perform login operations in parallel.
• A login scenario without logout tests how many users can be in the state “being logged in”
concurrently, which is usually limited by the EP server node memory available for user
contexts.


Also note that even without explicit logouts, user sessions expire if they are inactive for a certain
timeout period.

4.2.3 Single-User, Two-Loop Testing with Zero Think Time

The simplest test is to simulate one user only, with two loop executions and a think time of zero. The
obvious advantage of this test is that it can be done very quickly and still leads to some useful results.
For this test it is necessary to have good HTTP traffic logging features in the load-testing tool; if this is
not the case, skip this test method. In the logs one might check for:
• HTTP/1.1 4xx return codes, indicating some sort of functional problem with the tested
scenario.
• HTTP/1.1 304 return codes, indicating missing configuration of mime file expiration dates,
which is an opportunity for further performance improvements. The URL for the Web object
causing the 304 return codes can indicate if this object is delivered by EP or by some
backend system.
• Compression: if compression is enabled on EP and correctly negotiated by the simulated
browser, the log should show the response data arriving in compressed format.
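If the tool exports its HTTP traffic log in a common access-log style, the return-code checks above can be automated. A minimal Python sketch follows; the log format shown is an assumption, so adjust the pattern to your tool's actual output:

```python
import re
from collections import Counter

def status_histogram(log_lines):
    """Count HTTP status codes in access-log-style lines (assumed format:
    status follows the quoted request, as in common/combined log format).
    4xx codes indicate functional problems with the tested scenario;
    304 codes point to missing mime file expiration settings."""
    pat = re.compile(r'"\s+(\d{3})\s')   # e.g. ... "GET /irj HTTP/1.1" 304 512
    return Counter(m.group(1) for line in log_lines
                   for m in [pat.search(line)] if m)

sample = [
    '10.0.0.1 - - [x] "GET /irj/portal HTTP/1.1" 200 5120',
    '10.0.0.1 - - [x] "GET /irj/style.css HTTP/1.1" 304 0',
    '10.0.0.1 - - [x] "GET /irj/missing.gif HTTP/1.1" 404 210',
]
counts = status_histogram(sample)
print(counts["304"], counts["404"])   # both warrant investigation
```

A high share of 304 responses relative to 200 responses is the signature of missing expiration-date tuning.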

4.2.4 Single-User Test with Multiple Loop Executions and Zero Think Time

A load-test tool provides detailed information about what contributes to overall response times. A test
with one simulated user, multiple loop executions, and no think time is quickly performed and gives
better statistical average values than the previous test scenario. The number of loop executions
should be in the range of 10 to 100.
This test shows the difference between executing a scenario with an empty simulated browser cache
in the first loop execution, and the effect of the browser cache pre-filled by that first iteration in the
subsequent iterations. Usually such tests indicate:
• Differences in data transfer volume with and without browser caching.
• Differences in the number of HTTP roundtrips.
• Differences in response times.
• Effects of missing tuning optimization, for instance the ratio of HTTP/1.1 200 (OK) vs. HTTP/1.1
304 return codes.
More sophisticated monitoring information might be:
• Data size breakdown of downloaded objects.
• TCP/IP level breakdown of response times, such as times for open connection, SSL
handshaking, data package retrieval.
• Breakdown of server times versus network times.

4.2.5 Multi-User Test with Multiple Loop Iterations and Zero Think Time

The first real multi-user load test should also be performed with zero think time. The simple but big
advantage is still that these tests do not take a long time: many iterations of testing and tuning can be
done within one workday. With zero think time, the results of this test should be regarded as best-case
results. Final results of tests with think time are usually not as good, but the differences are usually
only 10 to 30%.


The goals of such tests are:


• Best-case, first-sizing estimate
• Test of scalability
• Test of short-time stability.
Test scenario:
• Define the number of users to be about twice the number of CPUs for EP. This usually
guarantees that you maximize the landscape processing capacity. Exceptions occur where
scenarios include long calls to backend systems by EP. In such cases more simulated users
might be required.
• Set the think time to zero.
• Perform the ramp-up test, adding one user at a time. The time interval should be 3 to 10
times the combined response times of all steps within one loop execution. The response
times can be taken from previous single-user tests.
The results to measure are:
• Maximum req/sec throughput
• Maximum CPU usage of all servers
• Response times depending on number of simulated users

These results help you to verify the requirements as follows:


• CPU usage should approach 100% at high user load. This verifies good SMP scalability. If
the maximum CPU usage is significantly less than 100%, there might be contention and/or
bottlenecks, warranting further investigation.
• The maximum achieved requests/second is basically the capacity metric for EP. For
production, SAP recommends that you size the hardware so that the desired throughput is
about 65% of system capacity.
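The test design rules above can be expressed as a small calculation. The ramp interval factor of 5 is one choice within the recommended 3 to 10 range; the CPU count and response times are illustrative:

```python
def rampup_parameters(num_cpus, step_response_times, interval_factor=5):
    """Derive zero-think-time ramp-up parameters from the guidelines above:
    about 2 simulated users per CPU, and a per-user ramp interval of 3-10x
    the combined single-user response time of one loop execution."""
    users = 2 * num_cpus
    interval = interval_factor * sum(step_response_times)
    return users, interval

def production_capacity(max_req_per_sec, target_utilization=0.65):
    """Throughput the hardware should be sized for: the desired production
    load should be about 65% of the measured maximum capacity."""
    return max_req_per_sec * target_utilization

# Illustrative: 4 CPUs, single-user step times of 1.2 + 2.0 + 0.8 seconds
users, interval = rampup_parameters(4, [1.2, 2.0, 0.8])
print(users, round(interval, 1))              # 8 users, 20.0 s between ramp steps
print(round(production_capacity(6.5), 3))     # 4.225 req/s sustainable target
```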

4.2.6 Multi-User, Multi-Loop Execution Test with Think Time

Only the last test sequences should be performed with multiple concurrent users and some think
time. If the time allocated for load testing is short, the tests outlined below can also be performed
with zero think time. They then result in somewhat less accurate, best-case results, but take less
time.
Zero think time tests have the following shortcomings:
• Part of the memory usage is on a per user session basis, and not on a per request basis.
This is due to the fact that HTTP is a stateless protocol by nature, and user session context
information needs to be kept on the backend side, even between requests.
• There is some systematic overhead processing with think time between requests, for
instance connection timeouts if the keep-alive time is exceeded. If the think time is larger
than the keep-alive time, additional requests and CPU power must be spent to re-open
connections.

• The short runtimes and small number of transactions used might not be sufficient to warm up
a system. In particular, Java Garbage Collection overhead is not taken into account properly
with single-user measurements. The picture below shows a GC trace over more than one
hour under high system load:


Consequently, sizing, response time measurements and other metrics are more accurately
determined in test runs that include think times. However, the differences are often approx. 10% to
30%. Example:

Think time [sec]    Maximum req/sec [1/sec]
0                   6.5
3                   5.5
30                  4.8

This clearly shows that results of previous zero think time measurements reflect best cases only.
However, if the best case is not good enough, quick test runs as shown above can trigger corrective
measures in time and avoid implementation project delays.

The goals of multi-user tests with think time are:


• Stability tests
• Robustness tests
• More accurate sizing, scalability and response time measurements
These criteria are summarized as 3S2R in section 2.
The right think time is often subject to lengthy discussions and complicated considerations. Two
approaches to this problem are suggested here:
• The statistical approach
• The EP-specific approach.


These approaches were introduced in section 2.1. What is called the statistical approach here leads
to a test method with many concurrent users and long think times of perhaps several hundred
seconds. The EP-specific approach needs fewer users and operates with shorter think times in the
range of 10 to 30 seconds. The following tables give examples of both alternatives.
The statistical approach could work like this:

Requirement                                Example Value                      Result/Test Design Parameter

Users working on EP on a daily basis       1000 users
How often does one user execute the        8 scenario executions per day
test scenario per day?
“Clicks” needed per scenario execution     12 clicks per scenario
Working hours per day                      8 working hours
Calculated think time                      8 h work time * 3600 sec/h         300 seconds
                                           / (8 scen./day * 12 req/scen.)
Total execution time for one scenario      12 clicks * 300 sec think time     3600 sec = 1 hour
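The think time calculation from the table can be verified with a few lines, using the example values above:

```python
def statistical_think_time(work_hours, scenarios_per_day, clicks_per_scenario):
    """Think time per click in the statistical approach: spread the whole
    working day evenly over all clicks a user makes during that day."""
    return work_hours * 3600 / (scenarios_per_day * clicks_per_scenario)

tt = statistical_think_time(8, 8, 12)
print(tt)        # 300.0 seconds of think time per click
print(12 * tt)   # 3600.0 seconds = 1 hour per scenario execution
```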

Advantages of the statistical approach are:

• Easy questions and calculations.
• A good match with the existing SAP GoingLive Analysis questions and sizing procedures.
Disadvantages are:
• Unrealistic assumptions about user behavior:
o 300 seconds (5 minutes) of think time between “clicks” is not what a user
actually does. Realistic end-user-oriented think times are in the range of 10 (fast) to
30 seconds (slow).
• Large think times might exceed timeouts and cause timeout overhead that would not occur
in reality:
o The recommended HTTPS keep-alive time is 300 seconds. If the think time exceeds 300
seconds, connections are closed more often in a test than in reality.
• Test times are long:
o One scenario iteration takes 1 hour, and multiple iterations are often needed.

The EP-specific approach could work like this:

Requirement                                Example Value                          Result/Test Design Parameter

Users working on EP on a daily basis       1000 users
How often does one user execute the        8 scenario executions per day
test scenario per day?
“Clicks” needed per scenario execution     12 clicks per scenario
Working hours per day                      8 working hours
Realistic real-world average think time    20 seconds
Estimated or measured average EP           2 seconds
server response time
Total time for execution of one            20 sec think time                      22 seconds
scenario step                              + 2 sec response time
Execution time for one scenario            22 sec scenario step time              264 seconds = 4.4 minutes
(12 clicks)                                * 12 steps
Requests per second on EP server side      (1000 users * 8 scen./day              3.3 req/sec
                                           * 12 req/scen.) / (8 h * 3600 sec/h)
Number of concurrent users                 22 sec total step time                 73 concurrent users
                                           * 3.3 req/sec

An EP-specific test would run 73 concurrent users with 20 seconds think time.
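The derivation in the table above can be reproduced with a short calculation:

```python
def ep_specific_load(users, scen_per_day, clicks_per_scen,
                     work_hours, think_time, response_time):
    """Server-side request rate and number of truly concurrent users for
    the EP-specific approach, following the table above."""
    req_per_sec = (users * scen_per_day * clicks_per_scen) / (work_hours * 3600)
    step_time = think_time + response_time          # one click incl. think time
    concurrent = req_per_sec * step_time            # Little's law style estimate
    return req_per_sec, concurrent

rps, conc = ep_specific_load(1000, 8, 12, 8, 20, 2)
print(round(rps, 2), round(conc))   # 3.33 req/sec, 73 concurrent users
```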

The advantages of the EP-specific approach are:


• Much lower number of concurrent users required.
• Faster test execution due to shorter think times (1000% difference).
• Lower costs for a stress test tool license that is usually based on the number of concurrent
users (1000% difference).
• More realistic reflection of real user behavior.
The statistical and EP-specific think time approaches lead to the same req/sec load by design. At first
sight they lead to the same results for sizing and other metrics. However, you must consider that the
actual user will have long breaks between scenario executions.
With the above example you get the following results for total scenario execution time (TE):
TE = 8 scenarios/day * 12 req/scenario * (20 sec TT + 2 sec resp. time)
   = 2112 sec
The length of breaks (LB) is:
LB = (8 h * 3600 sec/h work time/day – TE) / 8 scenarios/day
   = 3336 sec
   = 56 minutes
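The TE and LB figures above can be checked as follows:

```python
def break_length(scen_per_day, clicks, think_time, resp_time, work_hours=8):
    """Total scenario execution time (TE) per day and the average break
    length (LB) between scenario executions, as derived above."""
    te = scen_per_day * clicks * (think_time + resp_time)
    lb = (work_hours * 3600 - te) / scen_per_day
    return te, lb

te, lb = break_length(8, 12, 20, 2)
print(te, lb, round(lb / 60))   # 2112 s executing, 3336 s = 56 min breaks
```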

Thus, the subtler advantages of the EP-specific approach are:


• Length of breaks:
If the iteration part of a test script includes login and logout, J2EE sessions are closed and
rebuilt with every iteration cycle. This better matches production, because break times are
usually larger than the default 30-minute J2EE session timeout. Therefore, sessions have
to be rebuilt in production for each scenario execution.
• Short think times:
Since keep-alive settings are often greater than think times (300-second keep-alive times
are recommended by SAP for HTTPS), the EP-specific approach does not cause unrealistically
frequent connection closures and reconnects.

Given the complex relationship between concurrent users and think time, communicating
the number of users to the customer and other parties is often difficult and can lead to
misunderstandings and misinterpretations of agreements.
Therefore the following terminology might be used for EP:
• Named users: users who have a login account for EP. They might or might not log in on a
daily basis.
• Logged-on users: users who are logged in but have break times between the times they use
EP. At many customer sites, users log in only once in the morning, and then stay logged
in for sporadic EP use all day long.
• Concurrent users: users who are actively working in the portal with a certain think time,
typically in the range of 10-30 seconds.

Most tests with think time would be designed analogously to those described for zero think
time, and simply run.

4.2.6.1 Robustness Testing

A new test type with think time is the robustness or overload test.
The underlying questions and motivation for such a test are:
• Will EP survive and not crash after a brief period of system overload?
• In what way is performance diminished during an overload phase?
The test scenario resembles a ramp-up test with a maximum number of users approx. 100% above
the sustainable capacity maximum, or even higher (such as 400%).

After completing a robustness test, you should verify either manually or with a single-user test that EP
is back to normal performance. Some response errors during the overload phase are normal. The
number of errors increases as the overload factor increases. This is also normal and mostly due to
timeouts and request queue overflows. Again, it is important to show that the EP processes do not
crash. It should not be necessary to restart EP after the test.
A successful robustness test can significantly improve customers’ trust and confidence in their EP
system.

4.2.6.2 Stability Testing

The purpose of the stability test is to check if an EP system can run for a long time without
unplanned downtime and without performance degradation.
Test scenario:

• Define a number of concurrent users with a think time at which system load is about 60-70%.
This load point can be found from previous ramp-up tests.


• Ramp up the users quickly, within about 5 to 10 minutes.

• Run the test with a constant load for 6 to 12 hours (preferably overnight).
• Monitor as many different metrics as are meaningful (response times, req/sec, CPU usage,
but also thread count and network traffic).

The test is successful if:


• The test runs to its end without EP going down.
• There are no severe error messages in EP and J2EE logs.
• Most loop executions succeed and the failing request rate is less than 1%. (In the Web world
it is hard to really get to 0% response errors).
• All metrics monitored during the constant load phase should also be constant with just
statistical fluctuations around an average value.
Any trend upward or downward for monitored metrics usually means that the system will hit the
ceiling or bottom of some resource after a long run-time and crash! Corrective actions are required in
this case.

The errors found with stability testing are:


• Java memory leaks.
• Backend, LDAP, and database connectivity problems under load.
• Network problems occurring under load and causing response errors.
• Load-balancing problems.

Java memory leaks are hard to detect with the usual OS-based process memory monitoring. The
JVM pre-allocates a large memory heap and then does its own memory management within that
heap. From the OS side the JVM memory usage always looks constant.
Memory leaks build up internally in the JVM and eventually crash the JVM process with out of
memory errors. The JVM memory management can be traced by applying the “–verbose:gc”
parameter to the JVM parameters. SAP recommends that you use this trace all the time in production
systems. The trace overhead is negligible.
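A simple way to detect an upward post-GC heap trend is to parse the -verbose:gc output automatically. The line format assumed in this sketch ('[GC <before>K-><after>K(<total>K), <t> secs]') is typical for Sun JVMs of this era, but it varies between JVM vendors and versions, so the regular expression may need adjusting:

```python
import re

def post_gc_heap_trend(gc_log_lines):
    """Extract heap occupancy after each garbage collection from
    -verbose:gc output and report the average growth per collection.
    A steadily rising post-GC value is the signature of a memory leak."""
    pat = re.compile(r'(\d+)K->(\d+)K\(\d+K\)')
    after = [int(m.group(2)) for line in gc_log_lines
             for m in [pat.search(line)] if m]
    if len(after) < 2:
        return after, 0.0
    growth = (after[-1] - after[0]) / (len(after) - 1)  # KB retained per GC
    return after, growth

# Illustrative trace lines showing a rising post-GC heap
sample = [
    "[GC 120000K->40000K(262144K), 0.0210 secs]",
    "[GC 160000K->52000K(262144K), 0.0224 secs]",
    "[GC 172000K->64000K(262144K), 0.0231 secs]",
]
heaps, leak_per_gc = post_gc_heap_trend(sample)
print(heaps, leak_per_gc)   # rising post-GC heap => suspected leak
```

From the growth per collection and the heap size, the time to an out-of-memory crash can be extrapolated, as described below.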
A drastic memory leak traced using –verbose:gc might look like the following chart:


The memory leak builds up over time and the memory gain after garbage collection diminishes until
the JVM eventually crashes.

Corresponding to the memory loss, the garbage collection times increase and reduce Java
performance before the crash.
If a memory leak is detected, you should:
• From the measured memory leak, calculate how long it will take for a memory leak to cause a
crash.


• If that time is very long (weeks or months), it might not be necessary to do anything, because
EP is restarted at shorter time intervals, which resets the memory leak losses.
• If a memory leak is serious, try to narrow down the URL calls or iViews causing it by splitting up
your script into shorter, independently-tested sections, and report the problems found to the
responsible iView developer.
• Use Java profiling tools to find the Java coding that is leaking.

4.3 Load-Testing Scenario Design


The previous sections explained different load-test execution methods and their purposes.
The next decision is which business scenarios should be tested.
The following scenarios are suggested and will be explained:
• Login/logout tests
• Navigation step tests
• URL iView tests
• Java iView tests
• Business scenario tests

4.3.1 Login/Logout Tests

The main requirement for EP is to support a large number of named and logged in users. This
requirement leads to two questions:
• How many users can log in within an hour?
Business case: Some 1000 employees arrive at work during the morning and log in routinely
to EP, because it is the launch pad to many other applications.
• Is the JVM server node memory large enough to hold the session context for the logged in
users?

The following login/logout test scenario is suggested:


• Record a script which has just two steps within one loop execution:
o Call login page
o Submit login and go to the EP Welcome page
• Parametrize the login user randomly from a large list of users.
• Run this scenario with a few virtual users (users executing the script) without think time and
measure the req/sec rate at about 20% system load. The low 20% value is chosen because
other activities besides login also need resources in production.
• Use this req/sec rate to estimate how many logins can be performed per hour.
• Do not run your script for too long: if your login rate is high, you might open more new
sessions than required, and the Java memory might be exhausted by creating more concurrently
open sessions than you would ever have in production.

With zero think time and only two steps in the script, this test is quickly performed. It can be also
performed very early during an EP implementation project.


This test should be repeated whenever the Welcome page content changes.
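The extrapolation from a measured req/sec rate to logins per hour might be sketched like this. It assumes CPU usage scales roughly linearly at low load, which is only a first approximation:

```python
def logins_per_hour(measured_logins_per_sec, measured_cpu_load,
                    target_cpu_share=0.20):
    """Extrapolate the sustainable login rate by scaling the measured rate
    to the ~20% CPU share reserved for logins in production. Assumes
    roughly linear CPU scaling, valid only at low load."""
    rate_at_target = measured_logins_per_sec * (target_cpu_share / measured_cpu_load)
    return rate_at_target * 3600

# Illustrative: 0.5 logins/sec were measured at exactly 20% system load
print(round(logins_per_hour(0.5, 0.20)))   # 1800 logins per hour
```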

4.3.2 Navigation Step Tests

EP supports an SAP SSO cookie that is often configured to be valid for at least one whole working
day. Users usually log in once during a day. The next most frequent activity in EP is to navigate to
different iViews.
A test scenario for navigation steps could be as follows:
• Call login page and enter user credentials before loop execution.
• Perform different typical navigation steps within one loop iteration.
• Log out after loop execution.
• Execute the ramp-up test with zero think time for the purpose of sizing.
• Perform a stability test with think time to check for memory leaks. Navigation steps lead to
the entry pages of iViews; as a result you might be able to detect problems.

4.3.3 URL iView Tests

URL iViews are iViews that link to other applications with their own built-in Web server. An
example is ESS/MSS with ITS.
Since such applications deliver completely rendered Web pages, the load on EP from such iView
calls is very small or even zero, because URL iView requests might bypass EP entirely. This can be
seen in the URLs usually recorded in load-testing scripts:
• EP URL:
URL=https://entportal:1443/irj/servlet/prt/portal/prtroot/com.sap.portal.navigation.portallauncher.d
efault
• URL iView URL:
URL=http://sapweb:1080/SMI/HotOffThePress/AAF/AAF_030112004_AlertAPOSNP.html

Note that URL iView stress tests might put a large load on application systems. These application
systems might be productive systems at the customer site, even if EP is not yet productive. They
might not be performing perfectly themselves either. Avoid accidentally causing productive
application systems to fail through EP load testing.
Test scenario:
• Log in and navigate to an URL iView before loop execution.
• Perform steps within an URL iView, for instance submit a time sheet in ESS/ITS.
• Log out after loop execution.
• Execute a script with 2 to 4 users to avoid overloading the backend systems. Do not define
think times for these users.
• Measure CPU usage of EP and the backend application server to demonstrate that costs on
EP side are minimal.
Test results should be:
• Demonstration to the customer that a URL iView causes little or no resource consumption on
the EP side.


• This test can be used in cases where an “EP problem” is not caused by EP, but by a backend
application system.

4.3.4 Java iView Tests

These tests are particularly important because:


• Java iViews are essentially small Java programs executed within the Portal Runtime.
• Since programs can be large or small, Java iView sizing is not predictable during the sizing
process, and should be performed with measurements for individual cases.
• Java iViews are the primary source of EP problems:
o They are the most frequent cause of memory leaks.
o They might connect to external software components, and connection handling might
cause bottlenecks and hang-ups.
o All Java iView problems endanger EP because they are executed within the
EP/J2EE/JVM process stack.
Java iView test scenarios should be analogous to the URL iView tests described in the previous
section. Test execution should be as follows:
• Ramp-up test with zero think time for sizing
• Stability test with think time
Desired test results are:
• Sizing and confirmation of stability for Java iViews.
• Performance and stability problems addressed to Java iView development.

4.3.5 Piecing Together EP Expert Sizing

The four test scenarios outlined above (login/logout, navigation steps, URL iView steps, Java iView
steps) deliver the necessary results for expert EP sizing based on customer requirements.
An example is shown below in the following table.

Test scenario       Metric                                    Result

Login/logout        500 logins/hour                           10% CPU usage
Navigation steps    Additional 3000 navigation steps/hour     Additional 10% CPU usage
URL iView steps     Additional 10,000 URL iView steps/hour    0% CPU usage
Java iView steps    Additional 2000 Java iView steps/hour     10% CPU usage

The steps/hour input must come from customer requirements. The CPU usage comes from individual
measurements.


Altogether, 30% CPU usage is required for a total of 15,500 major operations per hour based on
tested hardware. This leaves a comfortable safety zone for sizing and requirement errors. Note that
this sum is only correct if the total CPU load is not too near to 100%!
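The additive sizing logic of the table can be sketched as follows; as noted, it is only valid while total CPU usage stays well below 100%, where resource consumption scales roughly linearly:

```python
def expert_sizing(components):
    """Add up CPU shares measured per scenario to a total sizing estimate
    (valid only while the sum stays well below 100% CPU usage)."""
    total_cpu = sum(cpu for _, cpu in components)
    total_ops = sum(ops for ops, _ in components)
    return total_cpu, total_ops

measurements = [   # (operations/hour, CPU fraction) from the table above
    (500, 0.10),     # logins
    (3000, 0.10),    # navigation steps
    (10000, 0.00),   # URL iView steps
    (2000, 0.10),    # Java iView steps
]
cpu, ops = expert_sizing(measurements)
print(round(cpu, 2), ops)   # 0.3 (30% CPU) for 15500 operations/hour
```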

4.3.6 Business Scenario Tests

All the tests discussed above have the advantage that they can be performed quickly and easily.
They help to detect, address and solve issues as early as possible during an implementation
process. Sizing is also verified. After successful tests, customers are usually highly confident that the
EP system is ready for production from a performance point of view. Nevertheless, for
comprehensiveness and a better level of understanding, it might be helpful to add a full business
scenario test.
The suggested schema for such a test is:
• Log in within loop execution.
• Perform some navigation and iView steps as an employee would.
• Do not log out, since an employee would not do that during a business day either.
• As shown above, calculate the number of truly concurrent users with small think times in the
range of 10 to 30 seconds.
• Execute a ramp-up with think time, robustness, and stability tests, and then derive sizing,
scalability, and response-time information.

4.4 Load Testing Different Enterprise Portal Landscapes


Load-testing methods and scenarios were discussed in the previous two sections. A third aspect of
testing is how to proceed with load testing as the EP landscape grows. The focus here is on a
systematic bottom-up approach, where possible.
Start with simple tests on one J2EE node on one hardware server. Then work your way up toward
multiple J2EE cluster node configurations on multiple machines. Finally, include load balancers, WAN
access, and such features as HTTPS in your testing. Employing such a step-by-step approach helps
you identify and analyze potential problems systematically and more efficiently.
A full cluster test only makes sense if you can be sure that each individual node is working and
performing properly, and that there are no significant differences between them when they are
measured separately.


4.5 Performance Test Plan


Performance testing should be done throughout the whole implementation project. A suggested
performance test plan with time estimates is shown in the following table. How often a certain test
needs to be repeated depends on the problems found.
The table below considers four phases of an EP project:
• Installation of EP finished
• Some content is deployed in EP
• All major iViews, in particular Java iViews, are deployed
• 2 weeks before the GoLive date
The execution times are broken down into engineer work time and load-test machine time. The
machine time might be at least partially unattended, for instance for long-running stability tests
overnight.
An EP implementation project plan should include a similar kind of performance test planning.

                                       Number of test repetitions           Time needed / hours
                                       during phases of EP schedule
Test scenario                          Install  First    Deploy  2 weeks    Script     Runtime   Total     Total test
                                                content  iViews  before GL  creation   per test  work      runtime /h
                                                                            + test /h  run /h    time /h
Login/out test, 0 TT, ramp-up             3       3        3       -           1          1         9          9
Login/out test, some TT, stability        -       2        2       2           1         12         6         72
Navigation steps, 0 TT, ramp-up           -       3        3       3           1          1         9          9
Navigation steps, 0 TT, robustness        -       -        -       2           1          2         2          4
Navigation steps, some TT, stability      -       -        2       2           1         12         4         48
URL iView steps, 0 TT, 2-user ramp-up     -       1        -       -           1          1         1          1
Java iView steps, 0 TT, ramp-up           -       -        3       3           1          1         6          6
Java iView steps, some TT, stability      -       -        3       3           1         12         6         72
Java iView steps, robustness              -       -        3       3           1          2         6         12
Business sc., some TT, ramp-up            -       -        2       -           3          6         6         12
Business sc., some TT, stability          -       -        -       2           3         12         6         24

Totals                                    3       9       21      20          15         62        61        269
Work day totals                                                                                  7.63      13.45


5 Test Preparation
The planning and preparation phase takes place before you actually go onsite for your test. This
phase allows you to run the test faster and more efficiently by avoiding or reducing hold-ups during
the actual testing.
If you are going to run a comprehensive load test just before the planned GoingLive date, it is
advisable to make sure that the landscape under investigation is as close to being finalized as
possible to produce representative and meaningful results:
• The hardware and software versions as well as the content have been finalized. If you are
going to run KM load tests, there must be a sufficient number of documents or content in
general in the repositories.
• Fine-tuning guides have been applied.
• There are no major open issues that could be a showstopper for your load test. Checking
open OSS messages or talking to your technical contact on the customer side can clarify this.
• The full operation of all portal components (pages, iViews, external services, etc.) is a
mandatory requirement. Example: if certain URLs cannot be retrieved (code 404), the
simulated load can differ substantially from the real workload.
In the worst case, if any of the above points change during or after the test, it could be necessary to
run the load tests again!
Pay attention to the following points during your test preparation. The more you can
cover before going onsite, the cleaner your start will be. It is not uncommon, however, for
preparations to take up one or two additional days of your onsite visit.
If possible, send the Load Testing Questionnaire that comes with this Best Practice paper to the
customer or fill it out with the customer’s help. The questionnaire addresses the most important points
that are outlined in the following section.
5.1 Goals and Success Criteria
Define goals and success criteria for the test you are going to carry out. As discussed in section 2 of
this paper, requirements should be defined according to the 3S2R principle: Sizing, Scalability,
Stability, Response Times and Robustness. In a real EP implementation project, the following tasks are often added:
• Fail-over behavior: A load test can generate the necessary load for testing proper functioning
of failover features.
• Troubleshooting of problems that only occur under load: This is an important task. During complex implementation projects, some issues are likely to show up. Reproducible load testing can speed up problem solving and keep the implementation project on track.
• Identification of performance bottlenecks, for instance in customer-specific developments or in backend systems
Define goals and success criteria that everyone agrees on, including you. The goals defined through
preliminary correspondence or conference calls determine which tests are to be run during your
onsite stay.
5.2 Business Scenario Definition
Define the load and the basis for your scripts by identifying and documenting relevant action
sequences in cooperation with the customer.
For the relevant scenarios, start out with simple tests as described in section 4.
Identify user groups, their roles and their critical actions (e.g. the most important business processes)
by simulating a user with respect to his role and typical usage patterns. If applicable, include a good
balance of TREX searches, calls to custom iViews and requests to KM, and pay particular attention to
frequently used iViews and those actions for which you expect a high resource consumption:
• Login
• Uncached content
• Administrative work on large repositories and large LDAP resources
• TREX search
• KM actions
Focus on creating realistic scenarios that mimic real user behavior. However, keep in mind that load
testing cannot simulate all possible scenarios for functional testing. For efficiency, limit the number of
different scripts to 3 and prioritize their execution so that you have some results, rather than none at
all, if you run out of time.
Documenting the scenarios in the form of tables (e.g. in Excel) has proved viable. The granularity of
the scenarios is a user click or action, i.e. use the script building blocks of your load test tool
(transactions, checkpoints, timers, etc.) to frame requests that are triggered by a single click or user
action (like submit login). Choose self-explanatory names for easy reference (e.g.
“Click_TLN_MyPage”). Some tools provide additional features to group the individual clicks on a
higher level. This allows for a modular approach when creating more complex scripts. Some simple examples were given in section 4.3.
Pay special attention to the definition of think times, as discussed in section 4.2.6. Keep your think
times short if the EP implementation team can agree on this. Short think times reduce test times and
are therefore more efficient. SAP strongly recommends that you randomize think times around the value you chose in your script(s), provided your tool of choice supports this feature. This helps to avoid a “marching army effect”, where all users synchronize on otherwise unimportant contention points and then perform the next step at exactly the same time, causing brief system overloads. Random think times also better simulate the user requests of live productive systems.
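If your tool does not offer think-time randomization, the same effect can be approximated in a scripted driver. The following is a minimal sketch; the uniform 20% spread is an illustrative assumption, not a value prescribed by this paper:

```python
import random
import time

def draw_think_time(base_seconds, spread=0.2, rng=random):
    """Draw a think time uniformly from [base*(1-spread), base*(1+spread)].

    Randomizing around the agreed base value prevents all virtual users
    from firing their next request at exactly the same moment.
    """
    return rng.uniform(base_seconds * (1.0 - spread),
                       base_seconds * (1.0 + spread))

def think(base_seconds, spread=0.2):
    """Sleep for a randomized think time between two user actions."""
    time.sleep(draw_think_time(base_seconds, spread))

# Example: a 5-second base think time varies between 4 and 6 seconds.
```

Commercial tools usually expose the same idea as a "random percentage of recorded think time" setting.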
5.3 Test Schedule
Evaluate and negotiate the available time frame for your tests:
• Is the portal used in production? Can it be restarted at will?
• Can the tests be run at any time or is there a time window for the tests (e.g. after office
hours)?
• What types of tests do you have to schedule (for instance, long-running stability tests)? How long are they going to run?
See if your tests fit in the time windows provided.
You might also consider performing test script development and single-user tests (see section 4.2) at times when there are other activities in your test system. Script development and single-user tests hardly put any significant load on the system, but still allow you to measure some performance data.
However, for multi-user load tests, the enterprise portal landscape should be exclusively available in
order to get valid measurements of system resources, such as CPU usage.

5.4 GoingLive Analysis Session
Schedule a GoingLive Analysis (GA) session. This standard maintenance session verifies system
sizing, runs parameter checks, and gives recommendations on how to optimize system performance.
5.5 Baseline Testing
Ask to have baseline performance tests run with LoadRunner or OpenSTA according to the Guide
Enterprise Portal 6.0 Baseline Performance. This guide consists of a package containing selected
standard content plus a pre-defined set of scenarios and scripts for both OpenSTA and LoadRunner.
It recommends a step-by-step approach for a set of simple basic standard load tests and gives
guidelines on how to monitor and identify bottlenecks within SAP Enterprise Portal and the J2EE
Engine during the tests.
These tests can be run on a freshly installed enterprise portal system and help to tackle basic issues
in the early phases of an implementation project. However, as these tests are of a basic nature, they
cannot replace other tests as described in previous sections.
5.6 Landscape Diagram
Ask the customer for a detailed landscape diagram as well as hardware and software specifications
of the different servers. The diagram should include:
• Servers, their hardware specifications, software versions, names and IP addresses
o Portal servers
o Servers that will be used to run the load tests and generate the load
o Backend servers
o Directory servers
• Firewalls
• Load balancers
• Proxy servers
• Network connections, segments and possible bandwidth restrictions. Will the portal be used
in a WAN environment?
If servers and/or clients (browsers of end users) are in different geographical locations, this should
also be explicitly shown in the landscape diagram.
5.7 Load Test Environment
Verify that the necessary hardware for the load test environment has been provided.
• Are there firewalls that could interfere with your tests?
• Is it possible to collect the server performance data from the load generator?
• Are there network bandwidth restrictions and hops?
• Is there network latency?
• Has the load test software been installed and licensed?
o What is the period of validity?
o Is the scope of the license sufficient for your tests (e.g. are there enough virtual
users)?
• What are the hardware specifications?
• Are there sensitive backend systems used in production that might be impacted by load testing?
It is important to check where the machines for your load test were or will be placed. If there are
firewalls that could interfere with test execution, you might have to open the right ports in the firewalls
or, in extreme cases, move the servers closer to the Enterprise Portal servers. The farther from the
portal your load-generating servers are, the more complex your analysis will become, as you will also
be testing more components in the customer’s infrastructure. In order to focus your testing on the
Enterprise Portal servers, place the load-generating machines right in front of the Enterprise Portal
servers.
The machines you will be using to generate load must not themselves become a bottleneck in the test.
A single PC may be insufficient, depending on the generated load, the employed load generator
tools, and the number of targeted simulated users. See the help or documentation for the load test
tool of your choice to get an idea of the hardware requirements.
5.8 Load Balancing
Is load balancing in place? If yes, which strategy is used for affinity (cookie-based or IP-based)?
If the customer is using IP-based load balancing, you will need a range of IP addresses for IP spoofing within your load test tool (if this is supported). Ideally, this will be a subnet of 256 addresses.
It is also very common that the load distribution was never verified and needs to be checked through
separate tests or checks within your load testing scripts.
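One lightweight way to verify the distribution is to record which J2EE node served each response (for example via a server-identifying cookie or header; the node names below are purely hypothetical) and count the hits per node. A sketch:

```python
from collections import Counter

def check_distribution(node_ids, tolerance=0.25):
    """Count responses per node and flag nodes whose hit count deviates
    from a perfectly even share by more than `tolerance` (relative).

    node_ids: one entry per response, taken from whatever marker your
    landscape exposes (cookie, header, or a value printed by the script).
    """
    counts = Counter(node_ids)
    expected = len(node_ids) / len(counts)
    skewed = {node: n for node, n in counts.items()
              if abs(n - expected) / expected > tolerance}
    return counts, skewed

counts, skewed = check_distribution(
    ["node1", "node2", "node1", "node2", "node1", "node2"])
# An empty `skewed` dict indicates an even distribution.
```

The tolerance value is an assumption; what counts as acceptable skew should be agreed with the implementation team.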
5.9 HTTPS
Is HTTPS being used? If yes, plan to simulate this in your test, as the secure protocol puts additional stress on the server CPUs (~10%) and causes many more roundtrips between client and servers during operation. This can be highly critical in environments with high latency.
5.10 External Tools
Is the installation and execution of analysis tools allowed, e.g. network monitors, tools from
Sysinternals etc.? Ideally the load-testing tool should be able to interface with external monitors such
that load data and EP resource consumption data, for instance CPU usage, can be easily correlated.
5.11 Backend Systems
Identify connected backend systems or external sources as potential performance bottlenecks. Either
exclude them from the test or include them in your monitoring strategy if you are planning a
comprehensive test for the complete landscape.
Since your load test will probably focus on the performance of the Enterprise Portal, it is usually
recommended to remove external links from the test scripts. This is especially critical if some of the
backends accessed during the test are productive servers. However, for end-user response time
perception it does not matter if response times are caused by EP or backends. Backend URLs should
be included for response time measurements. A good load testing tool might be able to help get
response time breakdowns by URL. If you include external systems within the company’s intranet,
make sure that the respective system administrators are aware of what you are doing.
In the case of backend connections (ITS-based WebGUI transactions, mail servers, directory servers,
etc.), most activity bypasses the portal and puts stress on other systems. In the most extreme case,
separate load tests for the backend systems have to be considered. Other actions that access
backend systems might require protocols other than HTTP.
5.12 Test User Accounts
Ask for a sufficient number of test users and passwords. The bare minimum for scripting is one user
account per role under evaluation. User IDs and corresponding passwords should be provided as a
text file to allow for easy integration into your scripts.
The closer the number of IDs/passwords is to the actual user base, the more realistic the load on the backend performing the authorization and on the creation of user sessions in the portal. If an
insufficient number of test accounts is provided, note this as a limitation in your report.
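Most load test tools can read such a text file directly. If you need to pre-process it yourself, a sketch assuming a hypothetical one-account-per-line `user;password` format:

```python
def load_accounts(path):
    """Read 'user;password' lines into a list of (user, password) tuples,
    skipping blank lines and comments."""
    accounts = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            user, password = line.split(";", 1)
            accounts.append((user, password))
    return accounts

def account_for_vuser(accounts, vuser_id):
    """Assign accounts round-robin so concurrent virtual users log on
    with different test users wherever the pool allows it."""
    return accounts[vuser_id % len(accounts)]
```

The separator and comment convention are assumptions; adapt them to the file the customer actually delivers.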
User management, personalization settings, PCD and the Corporate and Portal directory greatly
profit from caching mechanisms if the same accounts are used over and over again. The larger the
named user base and the smaller the numbers of test user accounts/virtual users, the greater this
impact. Real stress can only be achieved if a large number of different users force access to the
backend system involved in the authorization process (corporate directory server) and retrieval of role
and personalization information (portal directory server, PCD).
A large number of user accounts can also be created mechanically by designing and executing a suitable user-creation test, or by importing batch files with user data via the Enterprise Portal.
5.13 Contact Persons
Identify important contact persons for the most critical areas during load testing. You should produce
a list containing names and contact information for the following areas:
• Test goals and success criteria definition
• Workload definition, scenario and click path definitions, business processes
• Hardware administration
o Portal server
o Database
o Directory servers
o Firewalls
o Network
o Load balancer
• Software administration
o Operating system
o Portal content
o Portal operation
o Database
o Directory servers
o Firewalls
o Network
o Load balancer
• Load testing software and hardware
• Onsite office arrangements
5.14 Onsite Office Requirements
For efficient test execution and troubleshooting you and your team need an onsite office that meets
the following requirements as closely as possible:
• Workplace for each member in one room
• PCs or laptop connections to the local intranet
• Active network connections to all relevant servers including test environment and the internet
o Remote connection software and access data to all servers and clients
o Open OSS connection
5.15 Administrative Accounts
You need administrator access to various systems to work efficiently. These include:
• Portal servers
o Operating system (this is important if you are using remote OS monitoring on
Windows machines)
o J2EE Engine
o SAP Enterprise Portal
• Database
• Load test servers
• Firewalls
• Load balancers
• Local PC
• Backend systems
Make sure that you can access the consoles of all servers involved. Remote access with NetMeeting,
pcAnywhere, or Windows Terminal Server Client is recommended.
5.16 Monitoring Setup
You will be testing blind if you do not have sufficient monitoring in place. Only with
thorough monitoring will you be able to nail down potential bottlenecks and observe system behavior
under increasing load in order to assess its limits.
Most commercial load testing tools provide monitoring of their proprietary metrics and monitoring of
operating system performance counters. Make sure you read the requirements for setting up these
monitors and address them in time. Windows operating system monitoring based on perfmon, for
example, requires an administrator account on the machine you want to monitor and open NETBIOS
ports (137-139 and 445). If you are monitoring several machines independently of one another, you
want to synchronize system clocks (e.g. via net time, ntp or other tools).
Certain counters should be active on all machines involved (this includes the load-generating machines), and recording of these counters must be activated.
The minimum set of counters for Windows-based operating systems should include:
• Processor(_Total)\% Processor Time
• Memory\Available Mbytes
More extensive monitoring would encompass the following counters:
• Memory\Page Faults/sec
• Network Interface\Bytes Total/sec
• Network Interface\Current Bandwidth
• Physical Disk\% Disk Time
• Process\% Processor Time for individual processes (e.g. java.exe)
Note that Memory\Available Mbytes gives no insight into the memory management within the Java VM.
For in-depth monitoring of the J2EE Engine and Enterprise Portal, please refer to section 8.
5.17 Preparing the Portal
It often pays off to have a closer look at SAP Enterprise Portal before the first important test run.
Potential pitfalls for load tests and bad performance results can be identified proactively by carrying
out the following actions:
• Verify server-side iView caching and see if it makes sense for the particular iView under
investigation.
• Deactivate logging, debugging, and tracing. GoingLive Analysis sessions provide
recommendations regarding log levels.
• If logging cannot be disabled due to customer requirements, clear the most important log
files, reduce log levels (if allowed) and make sure that the disks are not nearly full before your
tests
• Carry out a configuration check to detect differences between individual nodes.
6 Script Recording
Script creation heavily depends on the tool you will be using for generating load. There are, however,
a series of guidelines you should follow.
6.1 Browser Settings
Browser settings during recording should match those used by clients during productive operation.
Pay particular attention to the following points:
• HTTP 1.1 (and HTTP 1.1 over proxy connections, if applicable) should be activated to profit from standard features such as compression.
• Empty the browser cache and clear cookies before recording.
• Check the cache settings and simulate these settings during your load tests to emulate
realistic browser behavior.
• Deactivate proxies unless required for Portal access.
• Check if you set the desired/correct browser language. If everybody is using Italian, you
should also do so. Content checks may differ depending on the language.
• Check for the correct browser version. You should use the same version for recording the
scripts as is used by the customer’s end users.
6.2 Protocol
Some load testing tools offer a range of protocols you can use during recording. It is recommended to use the generic HTTP/HTML protocol for recording SAP Enterprise Portal scripts. If
the enterprise portal under investigation is using HTTPS, also record your scripts with HTTPS.
6.3 Recording Log
If the load test tool of your choice does not create a full recording log or trace by itself (e.g. as a capture), activate this feature so that you can track down recording problems or issues during your initial replay tests.
6.4 Headers
Make sure you record headers that might affect your application behavior. “Accept-Encoding” and
“Accept-Language” are required for SAP Enterprise Portal. “Accept-Language” reflects the selected
browser languages. “Accept-Encoding” signals to the server that the client accepts content-encoding
(compression).
In some cases, custom applications require the recording of additional headers.
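How these headers are sent can also be checked outside the load tool with a few lines of standard-library Python; the portal URL below is a placeholder:

```python
import urllib.request

# Placeholder URL; substitute the portal under test.
req = urllib.request.Request(
    "http://portal.example.com/irj/portal",
    headers={
        "Accept-Encoding": "gzip, deflate",  # client accepts compressed content
        "Accept-Language": "it",             # must mirror the recorded browser language
    },
)

# urllib stores header names in capitalized form ("Accept-encoding"),
# so they must be looked up in that form.
assert req.get_header("Accept-encoding") == "gzip, deflate"
assert req.get_header("Accept-language") == "it"
# urllib.request.urlopen(req) would send the request with these headers.
```

Comparing the response size with and without Accept-Encoding is a quick way to confirm that compression is actually in effect.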
6.5 Keep-Alive
Make sure your load testing is sensitive to Keep-Alive connection timeout handling. If the load test
tool provides monitoring of browser-EP/backend connection handling, verify that connections are
properly re-used and not re-opened during Keep-alive timeout periods. Note that Keep-alive is a
configuration setting of EP/J2EE and most other Web servers.
6.6 Script Structure
The script structure should have the following pattern:
Timer_Start
    Requests triggered by one user interaction
    Content check
Timer_End
Think Time
Timer_Start
    Requests triggered by one user interaction
    Content check
Timer_End
Think Time

You can record think times and alter them later or ignore them during recording and add them later if
your load test tool permits this. Unless requested otherwise by the customer’s scenario, use a
standardized think time throughout the whole script.
Observe the following rules for efficient script design:
• Frame each user action with a transaction/timer/checkpoint
• Place wait times outside transactions/timers/checkpoints
• Create content checks for each transaction/timer/checkpoint. Scripts without content checks
are unacceptable.
• Use a test user account with the right role during recording
• Insert comments into the script to enhance readability
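The pattern and rules above can be expressed in a tool-neutral sketch; the response body below is a stand-in for a real portal request:

```python
import time

class Transaction:
    """Frame the requests of one user action with a timer and a content
    check; think times are deliberately kept outside the measurement."""

    def __init__(self, name, results):
        self.name = name
        self.results = results

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def check(self, body, expected):
        # Content check: an HTTP 200 alone does not prove the action worked.
        assert expected in body, f"{self.name}: content check failed"

    def __exit__(self, exc_type, exc, tb):
        self.results.append((self.name, time.perf_counter() - self.start))

results = []
with Transaction("Click_TLN_MyPage", results) as t:
    body = "<html>My Page</html>"  # placeholder for the real portal response
    t.check(body, "My Page")
# Think time would go here, outside the transaction timer.
```

Commercial tools provide the equivalent as transaction or timer markers; the point is the same framing of each user action.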
Replay and debug scripts after recording. Most load test tools offer log, output or report commands
for debugging or double-checking parameters (print virtual IP, J2EE node to see if load balancing is
working, user data etc.)
Many scripts require parameterization to make them more flexible (e.g. to use different logons,
search terms, etc.). If applications that are accessed by the script make use of dynamic IDs (e.g.
session IDs, cache IDs, or cookies) or URLs that change during each execution of the script,
parameterization is mandatory to obtain a functioning script. Each script has to be scanned carefully
to locate the requests that require parameterization. More advanced load test tools offer correlation
mechanisms to automate this process. Manual proofreading of the scripts is always necessary,
however.
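The correlation step typically boils down to extracting a dynamic value from one response and substituting it into later requests. A minimal sketch with a hypothetical session parameter name:

```python
import re

def extract_session_id(body):
    """Pull a dynamic session ID out of an HTML response body.

    The parameter name 'j_sessionid' is a made-up example; real portals
    embed such values in hidden form fields, URLs, or cookies.
    """
    match = re.search(r'name="j_sessionid"\s+value="([^"]+)"', body)
    return match.group(1) if match else None

body = '<input type="hidden" name="j_sessionid" value="A1B2C3" />'
next_url = "/irj/portal?j_sessionid=" + extract_session_id(body)
```

Load test tools implement the same idea as automatic correlation rules; manual regex extraction is the fallback when the rules miss a value.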
6.7 Load Testing Knowledge Management
Scripting and load testing KMC (Knowledge Management and Collaboration) requires handling
hidden POST forms and extensive parameterization, depending on the complexity of your scenario.
Failing to do so will result in ill-designed tests that do more harm than good. Bypassing the internal
cache mechanisms by using bad scripts, for example, will provoke memory leaks that would not
occur during normal operation.
Due to the complexity of the subject, we strongly recommend that you request support in the form of
additional material and documentation or consulting through your SAP contacts.
7 Test Execution
This section offers advice on what to look out for once you are ready to execute your load tests. Refer
to previous sections about different test types and how to set them up.
7.1 Before Execution
Check the following points before starting the current test execution:
• Set up and run monitoring from within your load test tool and within SAP Enterprise Portal.
• Restart and warm-up all SAP Enterprise Portal servers before major tests. Warm-up includes
running your current script with a small set of users for about 15 minutes.
• Create a new results folder for your new test. Write all data (monitoring, logs, screenshots,
your notes, etc.) to this folder.
• Is there enough hard disk space to save the results and log files? This is crucial for tests that run overnight.
• Approach the full load in stages (e.g. 1, 5, 50 VUsers, full, overload)
• Prepare and activate IP spoofing if you are testing IP-based load balancing
• Keep an eye on license limitations
• Check for and avoid external interference with your load test. If you cannot avoid the
interference, note it as a limitation:
o Load placed on the network or Portal by other users and applications
o Load on the load generators
o Additional software or applications on the Portal servers (scheduled operating
system tasks, anti-virus scans, etc.)
• If you run tests overnight, make sure that no automated maintenance tasks or batch jobs
interfere with your measurements.
7.2 During Execution
It pays off to watch the test during execution to see where it is going and to detect potential problems.
Getting a feeling for bottlenecks and a clear idea of the errors that are occurring is crucial for a
successful test. Log into the Portal occasionally with a separate browser to verify and follow the test.
Check for errors and see how many requests or actions are failing and why. If your load test tool
supports online monitoring, you can check on the number of failed requests and error details during
runtime. Tests with a high ratio of errors require further investigation. Check the logs of your load test
tool and those of SAP Enterprise Portal, and use the monitoring data you collect within the Java VM
for in-depth analysis.
Most metrics will display an asymptotic behavior when you reach the current limits of the system (e.g.
requests per second, throughput, CPU load). If you cannot push the CPU to 100%, there is a bottleneck in your system and it is not scaling properly. Refer to section 8 for monitoring
and analysis of possible bottlenecks by means of OS, SAP Enterprise Portal or J2EE monitoring.
7.3 After Execution
It is important to monitor how the system recovers after the test is over, particularly if you are running overload tests. Your enterprise portal might still be running, but you have to make
sure that it recovers completely:
• Does the CPU load return to close to 0% after the test ends?
• Are all threads released and returned to being idle (see Thread Overview within SAP
Enterprise Portal)?
Make sure you collect all additional logs, screenshots, notes, monitoring data, etc., and store them in
a central location. Use this material to perform initial analysis.
8 Monitoring, Troubleshooting and Analyzing Results
The final steps of load testing are monitoring, troubleshooting and analyzing test results. Monitoring provides a framework for collecting system information that can be analyzed during and after the tests. Troubleshooting is necessary if the system does not behave as expected or is not performing well. Finally, analyzing the results involves looking for relationships between the different pieces of information gathered during monitoring that give clues about system performance.
8.1 Monitoring
There are several points that should be monitored during testing cycles. Most testing tools come with
their own monitoring software that links to system utilities such as Perfmon, SNMP or RSTATD. The
testing tools record the information from these utilities and relate it to the information about the tests;
for example it is possible to relate the user load in a test to the CPU usage on the server. The steps
necessary to monitor various types of systems are given below.
8.1.1 UNIX Monitoring
The general principle for UNIX monitoring is to keep track of high-level metrics, such as CPU usage
and memory usage on the machine. These metrics need to be related to the system resource usage
of the J2EE Engine to get a better idea of how the system can be tuned.
The exact procedure depends on the tool you are using for monitoring. If you are using the LoadRunner monitoring functionality for UNIX, you need to run the rstatd daemon, a server that returns performance statistics obtained from the kernel. It is normally started by the inetd daemon.
Other testing tools such as OpenSTA require that SNMP is enabled to monitor UNIX system
resources.
8.1.2 Windows Monitoring
Monitoring Windows systems is more straightforward than monitoring UNIX systems. Perfmon can be
used to collect performance data on the remote Windows systems. To test your perfmon
connections, start perfmon on your laptop and connect to the remote system.
Windows operating system monitoring is based on perfmon and requires an administrator account on
the machine you want to monitor. You must also open NETBIOS ports (137-139 and 445) and have
proper system authentication. If you are monitoring several machines independently of one another,
you must synchronize system clocks (e.g. via net time, ntp or other tools).
Certain counters should be active on all machines involved (this includes the load-generating machines), and recording of these counters must be activated.
The minimum set of counters for Windows-based operating systems should include:
• Processor(_Total)\% Processor Time
• Memory\Available Mbytes
More extensive monitoring would encompass the following counters:
• Memory\Page Faults/sec
• Network Interface\Bytes Total/sec
• Network Interface\Current Bandwidth
• Physical Disk\%Disk Time
• Process\% Processor Time for individual processes (e.g. java.exe)
Note that Memory\Available Mbytes gives no insight into memory management within the Java VM. For in-depth monitoring of the J2EE Engine and SAP Enterprise Portal, refer to the section on J2EE Monitoring.
8.1.3 J2EE Monitoring
The J2EE Engine is the foundation of SAP Enterprise Portal 6.0. Most of your monitoring will
therefore be focused on this component. Monitoring the J2EE Engine is similar to monitoring a
separate physical machine because the Java Virtual Machine on which the J2EE Engine runs
performs its own memory management and can constrain CPU usage.
J2EE Memory
Refer to SAP OSS Note #498179 for step-by-step instructions on how to configure and run the J2EE
monitor.
Start the monitor server before each load test run. If you redirect the output into HTML files, you can open a menu by clicking index.html in a browser. Alternatively, you can request XLS output for easier analysis in Excel.
You can focus on the following counters to get an initial idea of the situation during your test:
• Memory
• Threads
• HTTP requests
• Database pools
Use this data to track memory consumption within the VM, especially during long-running tests. You
can see the amount of memory allocated for your settings in the start parameters (service.ini or
go.sh/.bat), the maximum amount of memory ever used up to that point in time, and the current
memory usage.
In addition to evaluating the data obtained from garbage collection, you can tell if the memory
consumption is increasing constantly and if memory is released properly during GC. Be careful how
you read this output. It is perfectly normal that memory consumption increases when you add more
load to the system (i.e. during ramp-up). You can only tell if there is a memory leak if the system
stays under constant load for an extended period of time (several hours) and the memory
consumption increases constantly during that time, eventually leading to an OutOfMemory error.
It is also possible to telnet into the J2EE Engine to monitor the JVM. The syntax for logging in is:
telnet <hostname> <port+8>
Example: telnet localhost 50008
The commands “msc” and “lsc” are of most interest during testing. They allow you to see how memory is being utilized and released.
Verbose GC
The verbose:gc parameter allows the J2EE Engine to record how much memory was in use, how
much was released, and how long it took to do the garbage collection. This information can be very
valuable when troubleshooting memory leaks or performance problems. Some examples are shown in section 4.2.6. It is recommended to implement verbose:gc on all server nodes during the testing process.
After implementing verbose:gc, your startup script should look like the file below. The relevant sections are the -verbose:gc switch, which enables the feature, and the subsequent log parameters, which redirect the output to the appropriate log files:
Service_0_MainClass=com.inqmy.boot.Start
ServiceCount=1
Service_0_RootDir=D:\SAP_J2EEngine6.20\alone
Service_0_Timeout=10000
Service_0_JavaPath=D:\jdk1.3.1_09\
Service_0_Name=alone
Service_0_Port=15505
Service_0_Parameters=
Service_0_JavaParameters=-verbose:gc -Dmemory.manager=1536M -Xmx1536M -Xms1536M -
XX:NewSize=512M -XX:MaxNewSize=512M -XX:PermSize=128M -XX:MaxPermSize=128M -
XX:SoftRefLRUPolicyMSPerMB=1000000 -Dredirect.input=true -Dconsole.input.disabled=true -
classpath ".;.\system-lib\boot.jar;.\system-lib\jaas.jar;" -Djava.security.policy=.\java.policy -
Dorg.omg.CORBA.ORBClass=com.inqmy.system.ORBProxy -
Dorg.omg.CORBA.ORBSingletonClass=com.inqmy.system.ORBSingletonProxy -
Djavax.rmi.CORBA.PortableRemoteObjectClass=com.inqmy.system.PortableRemoteObjectProxy -
Djavax.rmi.CORBA.UtilClass=com.inqmy.system.UtilDelegateProxy -Dhtmlb.useUTF8=X -
Dfile.encoding=ISO-8859-1
Service_0_JavaParametersDebug=-Dlog.output=service0.output.log
-Dlog.error=service0.error.log
-Dlog.launcher=service0.launcher.log
Other ways to run Verbose GC:
• As a Windows service: See SAP Note #608533
• From the Go script: See SAP Note #670315
Verbose GC Syntax:
[Full] GC <Memory before GC>-><Memory after GC>(<Total allocated memory after GC>),<Time
needed for GC> secs
Verbose GC sample output:
GC 24314K->22300K(24476K), 0.0068339 secs
GC 24348K->22341K(24604K), 0.0066202 secs
Full GC 24386K->13746K(25216K), 0.3872591 secs
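Given the syntax above, such output can be parsed mechanically, which is convenient when checking long stability runs for a leak (memory remaining after each Full GC trending upward). A sketch:

```python
import re

GC_LINE = re.compile(
    r"^(Full )?GC (\d+)K->(\d+)K\((\d+)K\), ([\d.]+) secs$")

def parse_gc_log(lines):
    """Parse verbose:gc lines into dicts with before/after/total KB."""
    records = []
    for line in lines:
        m = GC_LINE.match(line.strip())
        if m:
            records.append({
                "full": m.group(1) is not None,
                "before_kb": int(m.group(2)),
                "after_kb": int(m.group(3)),
                "total_kb": int(m.group(4)),
                "secs": float(m.group(5)),
            })
    return records

def full_gc_trend(records):
    """Memory left after each Full GC; a steadily rising series under
    constant load over several hours hints at a memory leak."""
    return [r["after_kb"] for r in records if r["full"]]

sample = [
    "GC 24314K->22300K(24476K), 0.0068339 secs",
    "GC 24348K->22341K(24604K), 0.0066202 secs",
    "Full GC 24386K->13746K(25216K), 0.3872591 secs",
]
records = parse_gc_log(sample)
```

The regex matches the syntax documented above; other JVM versions or additional GC options may produce differently formatted lines.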
J2EE Thread Monitoring
You can also use the Thread Overview in the portal (System Administration → Monitoring → Thread Overview) to see the thread status during runtime (for example, if you want to determine the right time to take a thread dump).

If the monitor shows many busy or waiting threads, increasing the thread parameters is not always
the right approach. More threads create a higher overhead within the VM, and increasing their
number can lower performance even further. If the problem lies elsewhere (for example, in the code
or in bottlenecks in the backend), adding more threads will not help.
See the HowTo guides for your SAP Enterprise Portal release in the SAP Service Marketplace for
guidance on thread management tuning.

J2EE HTTP Request Monitoring

This monitor gives an overview of how long requests take to be served (Average Time for
Request) and shows any queuing (Waiting Requests).

J2EE Database Pool Monitoring

Look for the database pool sapep on all servers. Check that there is a sufficient number of Free
Connections for the whole duration of the test, without many requests queuing up for a database
connection (Queued Requests).

As with the LDAP connection pool, increasing the pool size is not necessarily the right course of
action; similar considerations apply here. In extreme cases, a large number of hanging threads
becomes visible in the thread dump as requests queue for a free database connection.

8.1.4 LDAP Monitoring

Log on to the portal as super administrator and navigate to System Administration → System
Configuration → UM Configuration → Direct Editing.
Locate the parameter ume.ldap.connection_pool.monitor_level (section LDAP Settings) and set it to
2000 milliseconds. The monitor is activated at the next restart if you supply a value greater than 1000
here.
Activate LDAP pool monitoring as shown in the below slides.

Monitoring data is written to //usr/sap/EP0dir/j2ee/j2ee_<YourInstance>/cluster/server. The log files
follow the naming convention sapum_cpmon_<YourServer>_<YourPort>_<Type>.log.
Scan all of them to see if there were any bottlenecks during test execution:

JNDI connection pool status for pool: time / open connections / used connections / idle connections /
waiting threads
21.02.2004 14:14:39.770: 5 0 5 0
21.02.2004 14:14:44.780: 5 0 5 0
21.02.2004 14:14:49.790: 5 0 5 0
21.02.2004 14:14:54.800: 5 0 5 0
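Scanning these files can be automated. The sketch below parses log lines of the shape shown above and flags sample times at which threads were waiting for a connection (or the pool had no idle connections left); the flagging criteria are illustrative assumptions, not SAP thresholds:

```python
def bottleneck_times(lines):
    """Return timestamps at which the pool looked exhausted.

    Each line has the form 'dd.mm.yyyy hh:mm:ss.mmm: open used idle waiting'.
    """
    flagged = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # The timestamp itself contains colons, so split at the last ': '.
        ts, _, counts = line.rpartition(": ")
        open_, used, idle, waiting = (int(v) for v in counts.split())
        if waiting > 0 or idle == 0:
            flagged.append(ts)
    return flagged

log = [
    "21.02.2004 14:14:39.770: 5 0 5 0",
    "21.02.2004 14:14:44.780: 5 5 0 2",   # all connections used, 2 threads waiting
]
print(bottleneck_times(log))
```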

8.1.5 Portal Monitoring

Activate the User Management Performance Monitor (System Administration → Support → Support
Desk → User Management → Performance Monitor of UME Objects)

Check this monitor while the tests are running by pressing Refresh at regular intervals. Watch for
long-running requests to your LDAP repository (max and avg). Long LDAP response times
will increase your login times.
Monitor the logged-on users via System Administration → Monitoring → Portal → Logged On Users.
The Logged On Users overview displays the active user sessions. There should be no duplicates (i.e.
only one session is created per logon), and sessions must eventually disappear when the script logs
off the user.

8.1.6 Network Monitoring

This is a list of standard tools that can be used to analyze network conditions:
• netstat -a / -an – Lists all open connections. A permanently increasing number of connections
in ESTABLISHED or TIME_WAIT state may indicate a problem.
• route print – Displays the routing table.
• nslookup host – Displays all IP addresses assigned to a host name using the Domain Name
Service (DNS). The output of this command may differ between machines (and even between
consecutive executions on the same machine); in particular, the order of the IP
addresses may change. The mapping obtained from DNS can be overridden by editing
\WinNT\system32\drivers\etc\hosts. The entries from the local hosts file are not used by
nslookup; to check them, use ping.

• tracert host – Lists the intermediate hosts contacted (hops) along the complete path of a
network connection.
• ping host – Shows the single-packet response time and IP address.
• proxycfg – Checks the proxy configuration on the portal server. This configuration is used if
the portal server acts as a WWW client, for example when accessing a Web site with the
iView Server.
• niping (SAP Note #500235) – Checks latency and bandwidth of a network connection.
• NetIO – Measures the net throughput of a network via TCP/IP (and NetBIOS on Windows)
using different packet sizes.
• Ethereal (http://www.ethereal.com) – Network protocol analyzer.
• HTTPTracer (http://www.lazydogutilities.com/) – Produces a detailed log of the complete
HTTP conversation and can optionally record the complete content of requests/responses.
Note: the size calculations are wrong in some releases.
• HTTPLook (http://www.httpsniffer.com/) – Shows neither response time nor response size, so
do not use it for performance analysis. Good feature: it does not require proxy settings and
instead seems to register on a lower layer.
• Achilles (http://www.securiteam.com/tools/6L00R200KA.html) – Claims to trace SSL in plain
text.
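As an illustration of putting the netstat check above on a repeatable footing, the following sketch counts TCP connection states in captured `netstat -an` output (the sample output and four-column layout are assumptions based on typical Windows netstat formatting):

```python
from collections import Counter

def connection_states(netstat_output):
    """Count TCP connection states in captured `netstat -an` output."""
    states = Counter()
    for line in netstat_output.splitlines():
        parts = line.split()
        # Data lines look like: TCP <local address> <foreign address> <STATE>
        if len(parts) == 4 and parts[0] == "TCP":
            states[parts[3]] += 1
    return states

sample = """\
  Proto  Local Address      Foreign Address    State
  TCP    10.0.0.1:50505     10.0.0.2:80        ESTABLISHED
  TCP    10.0.0.1:50506     10.0.0.2:80        TIME_WAIT
  TCP    10.0.0.1:50507     10.0.0.2:80        TIME_WAIT
"""
print(connection_states(sample))
```

Taking such a snapshot at intervals during the test makes a steadily growing TIME_WAIT count easy to spot.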

8.2 Troubleshooting
Troubleshooting during a load test takes many forms. The sections above included detailed tools to
help debug network and scripting problems; this section focuses on debugging issues with the
Portal.

8.2.1 Debugging iViews

Use Portal monitoring to obtain an overview of Java iView performance.


• Portal Monitoring → Component Overview
o Sort by gross/net response time
o Components with top response time and high number of requests need a closer look:
Room for optimization?
• Portal Monitoring → Thread Overview
o Can be compared to SM50/SM66 of the ABAP stack
o Gives you an impression of "what's going on"

If a test indicates that a particular transaction is expensive:


1. Create scripts that “stress” suspicious components individually
Or:
1. Use the results of the transactions to identify pages that take particularly long to load

2. On these pages, right-click individual iViews, take the URL of each iView, and enter it in a
separate browser window
3. If an iView takes a long time to load, you have isolated the problem on the page

8.2.2 Troubleshooting Memory Issues

Much research has gone into troubleshooting memory issues with SAP Enterprise Portal.
Generally, the problems occur in the way the JVM handles memory. The SAP Notes listed
here are a starting point for troubleshooting memory issues: see SAP Notes #696410 and
#552522.
There are also Java profiling tools that allow you to look inside the J2EE Engine to determine how
memory is allocated to different objects. SAP provides a tool called Sherlok that can be used for
this purpose; other vendors such as Mercury and Wily also offer products that can help
here. For more information about the installation and use of Sherlok, see SAP Note #684907.

8.3 Analysis
Once your tests have been completed, you can analyze the results. Most load-testing tools come with
certain features to help you analyze system information. It is important that you link much of this
information together so that your analysis takes your J2EE Engine, Portal, LDAP, network and test
monitoring into consideration.
Much of the analysis is based on the goals of the test. If the goal of the test is benchmarking, the
analysis should focus on showing the results of the specific tests that were run. This allows for future
tests to be gauged against the results of this test.

8.3.1 System Analysis

System analysis is the most basic analysis done as part of a performance test. You can monitor how
the CPU is being used and verify that you are not running out of system resources, even while a test
is running.
By gathering this information you can determine where problems began to occur during your testing.
For example, if problems began after a given period of time when network traffic suddenly spiked,
you might determine that backup procedures were running during that window, resulting in
poor system performance.

8.3.2 Test Analysis

You should analyze your test results to ensure that the system returned the expected results and
that the number of errors is acceptable. The response times for transactions are recorded
in this section of the analysis.
One of the more important relationships to monitor is that between req/sec (requests per
second) and response time. During your test you should reach a point of diminishing returns: the
response time starts to rise, but the system can no longer process any more
transactions. This is the first sign that your system is becoming overloaded.
When tuning, this point is critical because it gives you a benchmark to determine whether tuning
efforts have improved system performance. The goal of tuning should be to improve response times
and allow more transactions to be processed by the system.
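Locating this point of diminishing returns can be sketched in code. The function below scans a series of load-test runs, ordered by increasing load, for the first level at which throughput stops improving while response time keeps rising; the sample numbers are invented for illustration:

```python
def saturation_point(samples):
    """samples: list of (virtual_users, req_per_sec, avg_response_ms), ordered
    by increasing load. Return the first load level at which throughput no
    longer improves while the response time keeps climbing, or None."""
    for prev, cur in zip(samples, samples[1:]):
        (_, prev_tput, prev_rt), (users, tput, rt) = prev, cur
        if tput <= prev_tput and rt > prev_rt:
            return users
    return None

runs = [
    (50, 120.0, 180.0),
    (100, 230.0, 210.0),
    (150, 235.0, 400.0),
    (200, 232.0, 900.0),   # throughput flat, response time climbing
]
print(saturation_point(runs))  # → 200
```

Recording this load level before and after each tuning change gives you the benchmark described above.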

Tool-specific papers will discuss how to monitor this information for each of our recommended tools.

8.3.3 J2EE Analysis

Since SAP Enterprise Portal runs on the J2EE Engine, analysis of the J2EE Engine is especially
important.

8.3.3.1 J2EE Memory Analysis

The graph below was created by importing the logs generated by the verbose:gc parameter into
Excel and charting the results. The chart shows a system performing regular garbage collections;
memory is collected correctly. If there is a memory leak, you will see less and less
memory being released and an upward trend in a graph like this (compare with section 4.2.6):

The saw-tooth lines represent memory before and after garbage collection.
Without a memory leak, the behavior resembles that shown below:
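The "upward trend" check can also be automated. The sketch below fits a least-squares slope to the heap level remaining after each Full GC; a clearly positive slope over a long test suggests a leak. The sample values and any threshold you apply to the slope are illustrative assumptions, not SAP guidance:

```python
def leak_slope(after_full_gc_kb):
    """Least-squares slope (KB per full GC) of the post-Full-GC heap levels."""
    n = len(after_full_gc_kb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(after_full_gc_kb) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, after_full_gc_kb))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

healthy = [13700, 13760, 13710, 13745]   # level heap after each full GC
leaking = [13700, 15200, 16800, 18500]   # steadily climbing
print(leak_slope(healthy), leak_slope(leaking))
```

The post-Full-GC values can be taken directly from the verbose:gc log lines shown in section 8.1 (the <Memory after GC> figure of each Full GC line).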

8.3.3.2 J2EE Log Analysis

Following a test, it is important to analyze the J2EE system logs to make sure that there is not a high
level of thread dumps or Java exceptions. Customer development teams should address such
programming errors, and OSS messages should be created where appropriate.
It is important to focus on the console logs in the directories:
\\usr\sap\p602\j2ee\j2ee_00\cluster\dispatcher\managers\console_logs
\\usr\sap\p602\j2ee\j2ee_00\cluster\server\managers\console_logs

Other logs can be found at:


\\usr\sap\p602\j2ee\j2ee_00\cluster\dispatcher\services\log\work
\\usr\sap\p602\j2ee\j2ee_00\cluster\server\log
