10.2.400
Revision: May 14, 2019 12:17 a.m.
Total pages: 221
Performance Tuning Guide
Contents
Introduction............................................................................................................................9
Purpose of this Guide.......................................................................................................................................9
Intended Audience...........................................................................................................................................9
How it is Organized.........................................................................................................................................9
Performance Tuning Resources...........................................................................................11
Hardware Sizing Guide..................................................................................................................................11
Performance and Diagnostic Tool...................................................................................................................11
VMWare Best Practices Guides.......................................................................................................................12
Common Patterns.................................................................................................................13
AppServer Random Crash..............................................................................................................................13
CPU Spike......................................................................................................................................................13
Frequent User Complaints..............................................................................................................................14
Everything Is Slow..........................................................................................................................................15
Sometimes Everything Is Slow........................................................................................................................15
Specific Program Is Slow................................................................................................................................16
Reboot Corrects Poor Performance................................................................................................................17
Unable to Reproduce by Epicor Technical Support..........................................................................................18
Memory Utilization Is High.............................................................................................................................18
System Freezes or Crashes.............................................................................................................................19
Printing Is Down............................................................................................................................................20
Screen Freezes...............................................................................................................................................20
Standard Metrics...................................................................................................................22
Performance and Diagnostic Tool.......................................................................................24
Installation.....................................................................................................................................................24
Interface Styles...............................................................................................................................................25
Client Trace Analysis......................................................................................................................................26
Activate from User Account....................................................................................................................26
Activate from Client................................................................................................................................28
Dataset Log Options...............................................................................................................................29
Client Trace Settings...............................................................................................................................30
Access Client Logs..................................................................................................................................31
Analyze Results.......................................................................................................................................32
Review Summary....................................................................................................................................35
Export Results.........................................................................................................................................36
Send Client Logs to Epicor......................................................................................................................37
Fields......................................................................................................................................................38
Scenarios................................................................................................................................................39
Configuration Check......................................................................................................................................40
Enter Configuration Check Settings........................................................................................................40
Analyze Configuration (Standard Metric)................................................................................................43
Performance Enhancements.........................................................................................................................181
Application Troubleshooting........................................................................................................................182
Client Cache................................................................................................................................................182
Locate the Client Cache........................................................................................................................183
Memory Cached Programs...........................................................................................................................183
Client Customizations..................................................................................................................................183
Order Entry Performance Tuning..................................................................................................................184
MRP Performance Tuning.............................................................................................................................185
MRP Scheduled Times...........................................................................................................................185
Moving Between Databases..................................................................................................................186
Net Change Mode versus Regenerative Mode.......................................................................................186
Finite Scheduling During MRP...............................................................................................................186
Sort 0 Level MRP Jobs...........................................................................................................................186
Number of MRP Processes and Schedulers............................................................................................187
Planning Time Fence.............................................................................................................................187
Reschedule In and Out..........................................................................................................................187
MRP Load Balancing.............................................................................................................................188
MRP Stops Running..............................................................................................................................188
Calling MRP Using REST........................................................................................................................189
BAQ Performance Tuning.............................................................................................................................189
BAQ Server Settings..............................................................................................................................189
BAQ Best Practices................................................................................................................................191
SQL Syntax Issues..................................................................................................................................192
Baseline Performance Tests...............................................................................................193
Test Setup....................................................................................................................................................193
Start the Test........................................................................................................................................193
Test Procedure.............................................................................................................................................194
Activate the Trace Log..........................................................................................................................194
Open Sales Order Entry Form.......................................................................................................................194
Test Form Performance Time.................................................................................................................195
Verify and Fix Performance Test............................................................................................................196
Customer Retrieval (Standard Metric)...........................................................................................................197
Run Customer Retrieval Test.................................................................................................................197
Sales Order Line Entry (Standard Metric).......................................................................................................199
Test Sales Order Detail Line Performance...............................................................................................199
Purchase Order Entry (Standard Metric)........................................................................................................201
Test Purchase Order Detail Line Performance.........................................................................................201
Support Checklist................................................................................................................203
Eliminate Potential Sources..........................................................................................................................204
Check Customizations..........................................................................................................................205
Disable BPM Directives..........................................................................................................................205
Recompile BPM Directives.....................................................................................................................206
Identify Program..........................................................................................................................................207
Main Details.................................................................................................................................................207
Introduction
Purpose of this Guide
The Performance Tuning Guide contains information on how to evaluate the performance of your Epicor ERP
application. You can then determine what may be the cause(s) of poor performance and make changes as needed.
Use this guide to perform diagnostic tests to compare your system performance against Epicor ERP standard
metrics. You can follow these tests for your own evaluation. These tests are also used by Epicor consultants and
technical support, so you can review these results with Epicor and develop a performance improvement plan.
Important: This version of the Performance Tuning Guide is for use with Epicor ERP version 10.0 or later.
Intended Audience
The guide is intended for technical consultants, partners, and system administrators. It helps ensure the Epicor
ERP application performs as expected and provides guidance on performance areas that should be addressed
before contacting Epicor consultants or Epicor Technical Support.
Individuals who perform all or some of these tasks will benefit from reviewing the Performance Tuning Guide.
How it is Organized
This guide explains how you can test the performance of your Epicor ERP application.
The following are the main sections of this guide:
• Performance Tuning Resources - Details the performance tuning resources available for use with Epicor
ERP. Review this section to learn more about each primary resource.
• Common Patterns - Describes the common patterns of poor performance. Each pattern is described along
with suggestions for testing and potential solutions.
• Standard Metrics - Epicor has established benchmark metrics for optimal performance. This section of the
guide contains a table that describes each benchmark metric and the tests you run to evaluate it.
• Performance and Diagnostic Tool - Documents how you install and use the Performance and Diagnostic
Tool, the key program you will use to evaluate the performance of client and server installations. This section
documents how you activate the client logs and server logs for the Epicor ERP application and then view them
within the Performance and Diagnostic Tool. It also describes the features, sheets, and fields available within
this tool.
• System Tuning - Contains some network and system tips that may help improve the performance of your
application.
• Microsoft Tools - Details the performance testing tools available from Microsoft®.
• SQL Server Trace Flags - Describes some custom traces you can activate in the server logs. You can then
use this additional logging information to review system performance.
• Customize Logs - Review this section to learn how you can customize the Epicor ERP server log and client
log to display more targeted performance information.
• Specific Tuning Options - These sections explore how you can test for locking and blocking, deadlocks,
memory leaks, and application crashes. Review the documentation in these sections to resolve specific issues.
• Application Tuning - This section contains some application tips and techniques you can do inside the Epicor
ERP application. You should only use the tips and techniques that match your use of the Epicor ERP application.
• Baseline Performance Tests - Provides detailed instructions for performing primary tests to measure how
well your installed system performs. Epicor recommends you run these tests after the application is set up and
configured to verify the Epicor ERP application has optimal performance. You can also run these tests periodically
later on to make sure the application performance has not degraded.
• Support Checklist - Contains a recommended series of tasks you can follow to improve how quickly Epicor
Technical Support can resolve your issue. Before you contact support, go through this checklist to first eliminate
potential causes for the issue and then if not resolved, gather system information to include in your support
call.
Performance Tuning Resources
Epicor representatives and customers can leverage some key resources to improve the efficiency of the Epicor
application and the systems on which it runs.
Hardware Sizing Guide
The Hardware Sizing Guide provides a practical approach to estimating the capacity you need for both the
Epicor ERP application and the database server.
The Hardware Sizing Guide matches the number of users against the database activity that occurs each day. It
next matches these two usage variables against a list of potential hardware configurations. You can then review
the various hardware and network items you could implement to improve application performance.
This guide also estimates the load that may be required in the future. So even if the current network and
hardware configuration is adequate, you can still use this document to determine whether it makes sense to
upgrade the system now to prevent performance issues before they occur.
This document is intended as a tool to help identify potential system upgrades. Before making any significant
changes, Epicor representatives, network consultants, and customers should work together to determine the
best outcome. Epicor representatives and network consultants may also have further recommendations specific
to a customer organization that cannot be documented in this guide.
The Hardware Sizing Guide is available from EPICWeb on the Download Management Portal. This guide is found
under the Utilities folder:
• Hardware Sizing Guide (link to EPICWeb)
Performance and Diagnostic Tool
Use the Performance and Diagnostic Tool to evaluate how the Epicor ERP application interacts with the system.
This important tool can identify changes that can potentially achieve significant performance results.
You run the Performance and Diagnostic Tool to analyze Epicor logs. These logs help measure performance on
both the client and server installations of the Epicor application. You use this tool to evaluate the following system
areas:
• Client Performance
• Server Performance
• Network Execution Times
• Server Execution Times
• Client Configurations
• Live Memory Tests
• Capture Server Logs
The Performance and Diagnostic Tool is included with your Epicor ERP application. Review the Performance and
Diagnostic Tool section later in this guide for installation and configuration information.
VMWare Best Practices Guides
Virtual environments provide flexibility and can scale up or down more easily than physical environments. Through
this design approach, you can clone machines and add them to the load balancing pool to accommodate changes
in usage patterns.
If you are implementing VMWare in your environment, be sure to review the performance tuning documents
released by VMWare. These .pdf guides contain the information you need to improve the performance of your
virtual environment.
• http://www.vmware.com/files/pdf/solutions/SQL_Server_on_VMware-Best_Practices_Guide.pdf
• http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf
Common Patterns
This section describes some common patterns for poor performance and the causes for these patterns.
If you are experiencing any of these common patterns, be sure to use the Performance and Diagnostic Tool as
needed to analyze your issues. This tool is discussed later in this guide.
AppServer Random Crash
Your system has an application server crash for no apparent reason. These crashes happen sporadically without
a specific pattern.
This situation occurs because of a .NET framework issue identified by Microsoft.
How to Evaluate
You can test for this issue through Crash Diagnostics. To do this, generate a memory dump file. You can then
review the data within the memory dump file to verify the conditions and review the details of the crash.
You activate the memory dump file by adding registry keys for the LocalDumps directory. You set these values
in the Registry Editor program. You then have a memory dump file you can review.
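As a sketch, the LocalDumps values can be applied with a registry script rather than editing keys by hand in the Registry Editor. The subkey below targets the w3wp.exe process discussed in this guide; the dump folder path and dump count are illustrative assumptions, and a DumpType of 2 requests a full memory dump (per Microsoft's Windows Error Reporting documentation):

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpFolder /t REG_EXPAND_SZ /d "C:\CrashDumps"
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpCount /t REG_DWORD /d 5
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpType /t REG_DWORD /d 2
```

Create the DumpFolder directory before the next crash so Windows has somewhere to write the file.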
If you determine this issue is the cause of the random application server crashes, Microsoft has released a hot fix
to correct this problem. Be sure you apply this update to your production environment. You can download this
hot fix from this web page:
• https://support.microsoft.com/en-us/kb/3139555
Testing Details
For more information on how to set up and generate a memory dump, review this section in this guide:
• Crash Diagnostics
CPU Spike
Your central processing unit (CPU) frequently uses over 90% of the available system resources. Your application
slows down or even stops while this CPU spike occurs.
How to Evaluate
You first need to launch the Task Manager. This key Microsoft tool will help you determine which program is
causing the CPU spike. Within the Task Manager, you need to add a couple of columns to the display; you do
this by accessing the Select Process Page Columns window and then selecting the CPU Usage and CPU Time
check boxes. Verify the spike occurs because of the w3wp.exe process; this program runs your Epicor application.
Then use the Performance and Diagnostic Tool to capture the next CPU spike. You do this by running a Live
Memory Inspection; be sure to use the Stack Trace option. Likewise you should also generate a memory dump
file, as this data will contain more details about the CPU spike. By recording the stack trace and generating the
memory dump file each time you experience a CPU spike, you will have the information Epicor Technical Support
needs to analyze the cause of the issue.
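The 90% threshold described above can be expressed as a small check. This is an illustrative sketch, not part of any Epicor tool; the sample readings are assumed to come from whatever CPU sampling you already collect:

```python
def cpu_spike(samples, threshold=90.0):
    """Flag a spike when every CPU reading (in percent) in the
    sampled window exceeds the threshold; the guide treats
    sustained use above 90% as a spike."""
    return bool(samples) and all(s > threshold for s in samples)
```

An empty window reports no spike, so a gap in sampling never raises a false alarm.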
Testing Details
For more information on how to set up a stack trace and generate a memory dump, review these sections in this
guide:
• Memory Test
• Memory Leaks
Some users frequently complain about a performance problem. They experience poor performance regularly, but
other users at this organization do not experience the same issues.
How to Evaluate
You can turn on the client (UI Trace) log for the whole day on the end-user client machine that has the performance
problem. You will most likely need to do this for several days to capture the required data. Be sure to also clear
the client log file each day so you can better evaluate the results. Note if your client log file expands too quickly,
try to save it every two hours and start a new log.
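The save-every-two-hours advice can be sketched as a small helper that archives the current log under a timestamped name and starts a fresh one. The function name and paths are hypothetical, not part of the Epicor client:

```python
from datetime import datetime
from pathlib import Path
import shutil

def rotate_client_log(log_path: str, archive_dir: str) -> Path:
    """Copy the current client log to a timestamped file and truncate
    the original so a fresh window starts logging (illustrative helper)."""
    src = Path(log_path)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M")
    dest = dest_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # preserve the captured trace
    src.write_text("")       # start a new, empty log
    return dest
```

Run it on a two-hour schedule (or whenever the log grows too large) so each archived file maps to a known time window.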
When the end user notifies you that the Epicor ERP application had poor performance during the current day,
you will have the trace data you need. Save the client log for that day. If you needed to save multiple client logs
for the same day, review all the client logs. Make sure you get the name of the program(s) impacted by the slow
performance.
If you are working with Epicor Technical Support, send the client log (or multiple client logs) to the consultant
or technical support representative helping you with the issue.
Testing Details
For information on how to run the client log and evaluate the results, review these sections in this guide:
• Client Trace Analysis
• Customize Logs
Everything Is Slow
How to Evaluate
You will need to contact Epicor Technical Support or an Epicor consultant for help evaluating this issue. Slow
system performance across an entire organization can be caused by a number of sources, so be sure Epicor
helps you determine the reason for the slow performance of your overall system.
To begin evaluating this issue, run client (UI Trace) logs that both identify the affected programs and trace
application performance. Then gather system information by capturing logs through the Performance and
Diagnostic Tool. These logs are automatically placed in the same folder, so you can retrieve them quickly for
review.
You next should evaluate system performance by running a series of Standard Metrics tests. Epicor has determined
a series of minimum performance levels for the Epicor ERP application. By running these tests, you can discover
whether your system passes these metrics. You can then send the logs and testing results to your Epicor
representative.
Epicor may also want you to use the Hardware Sizing Guide to determine the recommended hardware; this
guide evaluates data transaction traffic, number of users, and the system software to evaluate the recommended
hardware.
Testing Details
For information on how to run client logs and test standard metrics, review these sections in this guide.
• Client Trace Analysis
• Capture Logs
• Standard Metrics
You download the Hardware Sizing Guide from EPICWeb; note you will need to log in with your EPICWeb
account:
• Hardware Sizing Guide (link to EPICWeb)
Sometimes Everything Is Slow
Sometimes everything slows down, but other times the performance is fine.
How to Evaluate
This performance pattern could have a number of causes. Most likely, the slow performance occurs
because the Epicor ERP application was experiencing heavy data loading at that specific time. The other likely
possibility is that either a third party ODBC connection locked a record or the Epicor ERP application has some
other locking issue.
To begin evaluating this issue, run client (UI Trace) logs to identify the affected programs and trace application
performance. Make sure you get the name of the program(s) impacted by the slow performance. You should
also evaluate your system use of live memory and check for locking and blocking issues. Then gather system
information by capturing logs through the Performance and Diagnostic Tool. These logs are automatically placed
in the same folder, so you can retrieve them easily.
Once you have pulled together this information, you will be able to discover what is causing the slow performance.
Testing Details
For information on how to run client and server logs, memory tests, and locking and blocking tests, review these
sections in this guide:
• Client Trace Analysis
• Memory Test
• Capture Logs
• Locking and Blocking
Specific Program Is Slow
How to Evaluate
The causes for this pattern are described in the previous Frequent User Complaints section. You will need to turn
on the client (UI Trace) log and have the end user follow the interface or data entry pattern again to recreate the
poor performance. You then have a client log or multiple client logs to review. Place these logs in a central folder
so they are easy to review and pass along to Epicor Technical Support.
Testing Details
For information on how to run client (UI Trace) logs, review these sections in this guide:
• Client Trace Analysis
• Customize Logs
• Capture Logs
Reboot Corrects Poor Performance
How to Evaluate
To evaluate this issue, first activate the client (UI Trace) logs to identify which programs are running when the
slowdown happens. Then when the slowdown occurs, capture the current logs before you reboot the system.
You should also run some tests to check for memory leaks, deadlocks, and locking and blocking.
Testing Details
To learn more, review these sections in this guide:
• Client Trace Analysis
• Server Diagnostics
• Locking and Blocking
• Deadlocks
• Memory Leaks
Unable to Reproduce by Epicor Technical Support
Performance is slow on your server, but the Epicor test systems cannot reproduce the issue.
How to Evaluate
If Epicor Technical Support is unable to recreate the performance issue, consider the following:
• Evaluate the differences in hardware. Your hardware sizing needs to be evaluated by you and an Epicor
consultant or Epicor Technical Support.
• Different data items are used for testing, and so the performance test is not comparing similar conditions.
• The poor performance occurred when the system was experiencing heavy data load, so the performance
issues were caused by this load.
You use the Performance and Diagnostic Tool to gather system information you can then send to Epicor Technical
Support. To do this, you should evaluate the configuration of your application servers, capture server logs, and
evaluate your system. You should also evaluate your system using the Hardware Sizing Guide. Be sure to gather
the name of the program(s) impacted by the slow performance.
Testing Details
To download the Hardware Sizing Guide (requires EPICWeb account) and gather system information:
• Hardware Sizing Guide (link to EPICWeb)
• Configuration Check
• System Evaluation
• Capture Logs
Memory Utilization Is High
Frequently the CPU displays high levels of activity and the application server requires large amounts of active
memory.
How to Evaluate
This performance pattern is caused by one or multiple processes using unusual amounts of active memory. Some
processes that run on a reporting application server/task agent server can consume RAM while they process data.
While this is normal, a memory leak occurs when the process doesn't release this memory after it finishes running.
Note that the .NET/CLR garbage collector determines when to release this memory, so let some time elapse (>
30 minutes) before you determine whether your system is actually experiencing a memory leak.
For an application server or reporting application running through w3wp.exe, any level above 10 gigabytes
(GB) is high and can cause an application server to become inactive. Likewise, the top limit on application servers
and reporting application servers is 20 gigabytes. If an application server reaches this limit, the Epicor ERP
application will most likely freeze until the memory utilization goes down.
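The 10 GB and 20 GB thresholds above can be captured in a small classifier for use in your own monitoring scripts. This is an illustrative sketch: the constants mirror the guide's figures, and treating them as binary gigabytes is an assumption here:

```python
GIB = 1024 ** 3
HIGH_WATERMARK = 10 * GIB  # above this, w3wp.exe memory use is considered high
HARD_LIMIT = 20 * GIB      # at this level the application server may freeze

def classify_memory(bytes_used: int) -> str:
    """Map a memory reading for a w3wp.exe worker process to the
    guide's three bands: normal, high, or critical."""
    if bytes_used >= HARD_LIMIT:
        return "critical"
    if bytes_used > HIGH_WATERMARK:
        return "high"
    return "normal"
```

Remember the note above: because the garbage collector releases memory on its own schedule, sample over time (more than 30 minutes) rather than acting on a single "high" reading.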
Be sure to gather the name of the program(s) impacted by the slow performance. You can do this by generating
both client and server logs.
To identify what is causing high memory use, launch the Performance and Diagnostic Tool and run a Live Memory
Inspection. You can run a Stack Trace to determine what method(s) cause the application server to become
unresponsive. Likewise you can run a Memory Trace to determine the reason why your system is experiencing
high memory utilization. If you still cannot identify why the system requires so much memory, you can also use
the Performance and Diagnostic Tool to generate a Memory Dump file. Memory dump files contain specific
details that can help you and Epicor Technical Support discover the cause of the performance issues.
Testing Details
To test this performance issue, review these sections of the guide to learn more:
• Client Trace Analysis
• Server Diagnostics
• Live Memory Inspection
• Memory Leaks
System Freezes or Crashes
Your system frequently freezes or completely shuts down. When the system shuts down, the w3wp.exe file
crashes.
How to Evaluate
This performance issue may be caused within the Epicor ERP application itself or by the overall system.
Be sure you also capture the logs generated by the Event Viewer. The Event Viewer is an administrative tool
installed with your Windows operating system. Use this tool to review exceptions that occur outside of the Epicor
ERP application.
You then need to track the live memory used by your system and test the system for memory leaks. You should
also evaluate the crash by creating a memory dump file; you generate this file by activating a Crash Diagnostics
feature available from Microsoft.
Testing Details
Review these sections of the guide to learn more:
• Event Viewer
• Live Memory Inspection
• Memory Leaks
• Crash Diagnostics
Printing Is Down
When users attempt to print, it frequently takes a long time to finish. In some cases, users receive error messages
that the system failed to print out the document.
How to Evaluate
This situation occurs when the task agent(s) are unable to handle all the tasks assigned to them. You can try resolving
this issue by starting multiple task agents. These multiple task agents provide redundancy in your system, so
another task agent can process a print task when the original task agent is too busy.
If this solution doesn't resolve the performance issue, capture the logs generated by the Event Viewer. The
Event Viewer is an administrative tool installed with your Windows operating system. Use this tool to review
exceptions that occur outside of the Epicor ERP application.
Testing Details
Review these sections to learn more information:
• Event Viewer
• Capture Logs
Screen Freezes
The current Epicor ERP program (form) stops working. In the Classic Menu, the program becomes grayed out
and unresponsive; in the Modern Shell Menu, users can click on other items but anything involving the frozen
program does not run.
How to Evaluate
Each client only runs a single call (data transaction) at a time with the server. If you have a business activity query
(BAQ) or a tracker call that is taking a long time to process on the server, the client cannot run another server
call until this previous call finishes.
To evaluate this issue, generate both client (UI Trace) and server logs. You should also gather system logs by activating
the log capture feature; this feature places the system logs in the same folder.
Testing Details
For information on how to run client and server logs, memory tests, and locking and blocking tests, review these
sections in this guide:
• Client Trace Analysis
• Server Diagnostics
• Capture Logs
Standard Metrics
Epicor has developed a Standard Metrics table that identifies the benchmarks for optimal performance. Compare
these benchmarks against the customer system to locate areas of poor performance.
These performance benchmarks can identify key areas of improvement. Once these performance values are
determined, customers and Epicor representatives can investigate potential ways these performance values can
be tuned to achieve better results. This section first contains the Standard Metrics table and then a second example
table that illustrates a specific company's metrics compared against these standard metrics.
CPU Speed (Config Check)
Tests the general speed of the application server.
Benchmark: CPU speed <= 400 milliseconds

GetRowsKeepIdleTime (GetRowsKeepIdleTime Chart)
An overall system performance test. How fast does this common, static API method run over this infrastructure? This value can be compared across many machines.
Benchmark: 15 milliseconds average, low performance variability throughout the day; should always be less than 30 milliseconds

Network Test (Network Diagnostics)
Measures the impact of the network on the client experience.
Benchmark: Server (Blue) < 0.5 seconds, Network (Green) < 0.4 seconds

Configuration Check (Config Check)
Reviews the performance of the Epicor ERP application server.
Benchmark: The Epicor application server passes the configuration tests

Customer Retrieval Test (Baseline Test)
Measures database retrieval time by selecting and paging through customers.
Benchmark: Observed time to move through customers < 1 second

Sales Order Test (Baseline Test)
Tests the performance of Sales Order Entry, a commonly used, important business function. This test does not check customizations or BPMs.
Benchmark: Add 20 lines to a Sales Order < 36 seconds

Purchase Order Test (Baseline Test)
Tests the performance of Purchase Order Entry, a commonly used, important business function. This test does not check customizations or BPMs.
Benchmark: Add 20 lines to a Purchase Order < 26 seconds
The following sections describe how you test for these standard metrics.
* The bold values indicate a performance metric value that exceeds the value of an Epicor standard metric.
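The comparison between measured values and these benchmarks is simple arithmetic. As a rough illustration only (the metric names and sample values below are hypothetical, and the Performance and Diagnostic Tool does not expose this comparison as an API), a few of the millisecond-based benchmarks can be checked like this:

```python
# Illustrative sketch only: compare measured values (in milliseconds) against
# the millisecond-based standard metric benchmarks above. The metric names
# and sample values are hypothetical; the Performance and Diagnostic Tool
# does not expose this comparison as an API.
BENCHMARKS_MS = {
    "CPU Speed": 400,                       # benchmark: <= 400 ms
    "GetRowsKeepIdleTime (average)": 15,    # benchmark: 15 ms average
    "GetRowsKeepIdleTime (maximum)": 30,    # benchmark: always < 30 ms
}

def exceeded(measured_ms):
    """Return only the metrics whose measured value exceeds its benchmark."""
    return {name: value
            for name, value in measured_ms.items()
            if name in BENCHMARKS_MS and value > BENCHMARKS_MS[name]}

sample = {"CPU Speed": 350,
          "GetRowsKeepIdleTime (average)": 22,
          "GetRowsKeepIdleTime (maximum)": 28}
print(exceeded(sample))  # flags only the 22 ms average
```

Any metric returned by a check like this is a candidate area for tuning.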
Use the Performance and Diagnostic Tool to evaluate how the Epicor application performs through client, server,
and network tests. You also use this tool to evaluate the system configuration and download additional diagnostic
resources for use with SQL Profiler.
If you are experiencing performance issues, you should first contact either your Epicor consultant or Epicor
Technical Support. If the performance issue cannot be resolved through this initial contact, the technical support
representative or the consultant may recommend you use the Performance and Diagnostic Tool. This tool captures
performance information, and you can organize this information to receive meaningful metrics that relate to the
performance of your Epicor ERP application. You can also export these results to Microsoft® Excel® for additional
review and analysis.
Through the Performance and Diagnostic Tool, you can evaluate:
• The performance of one client versus another client on the same system.
• The performance of business object methods on both the client and the server.
• Overall performance of the server and the network.
• Performance of business objects in one system against the same business objects on other systems.
• Performance of customizations, personalizations, Business Process Management (BPM) methods, and business
activity queries (BAQs).
• The configuration of the Epicor ERP application.
Important The Performance and Diagnostic Tool released with Epicor ERP version 10 is only compatible
with the 10.0.600 version or higher. If you need to evaluate the performance of Epicor ERP 9.05 or earlier,
download the Performance and Diagnostic Tool released with the 9.05 version.
Installation
3. From the tree view, expand the Server Management node and the <YourServerName> node.
9. Click Close.
10. You can now launch the Performance and Diagnostic Tool. Depending on your operating system, you launch
this tool in different ways:
a. If you are on Windows Server 2008 R2 or Windows 7, click Start > All Programs > Epicor Software
> Epicor Administrative Tools > Epicor Performance and Diagnostic Tool.
b. If you are on Windows Server 2012 or Windows 8, press <Windows> + F to display the
Charms bar; from the Apps screen, select Epicor Performance and Diagnostic Tool.
Interface Styles
The Performance and Diagnostic Tool displays in a default interface style. You can change the interface by loading
in an existing style or creating a new style.
The fonts and colors that display on the Performance and Diagnostic Tool are customizable, so you can use the
styling features to design an interface look and feel you prefer.
To access the styling feature set:
2. The interface is now in styling mode. If you hover your mouse over different sections of the interface, a
dialog box displays with a series of shortcut keys for each item on the interface.
4. Use this window to modify the colors, borders, states, and so on for the selected item. You can make some
changes to the item and then preview the change.
Tip For complete information on how to use the styling features, review the Styling chapter in the
Customization User Guide.
6. When you finish creating your interface style, return to the Runtime Stylist and click Save.
Your style is now an available option for the interface.
7. To select this or another style, click Options > Styles; a list of available styles displays.
The Performance and Diagnostic Tool saves this selected style. The next time you launch this tool, it uses the last
style you selected from the Options menu.
You analyze the performance of client installations through client (UI Trace) logs. This section of the guide describes
how you set up these logs and analyze their results in the Performance and Diagnostic Tool.
Important To begin evaluating performance, you should always turn on both the client and server tracing
logs. These logs will help your team, Epicor consulting and support professionals, and network professionals
determine the cause of the performance issue(s). By having a series of tracing logs available, both you and
Epicor specialists can more thoroughly evaluate and resolve your performance issue(s).
You can set up the client (UI Trace) log to automatically activate each time users log in through their user accounts.
Run these logs for as long as you need; when they have gathered enough information, you can then deactivate
them.
You set up this feature through User Account Security Maintenance.
2. On the Detail sheet, click the User ID... button to find and select the user account you wish to update.
5. Select the Write Full DataSet check box if you want to record the entire dataset content (if any) that passes
between the client and the server. Each time a method sends data, it now appears in the client log with the
method.
6. Select the Track Changes Only check box if you only want changes to the dataset recorded within the
tracing log. All changes to columns in the dataset are then stored within the log.
7. Activate the Include Server Trace check box when you want to track the client's interaction with the server.
This creates a <serverTrace> node within trace packets (<tracePacket>) in the client tracing log. This option
is useful if you want to diagnose how client activity affects the application server.
Tip You can add server profiles and traces to the client log. When you select the Include Server Trace
check box, the client log captures these additional options. To add these profiles and traces to the
client log, update the .sysconfig file that launches the client installation. You can also customize what
the tracing log tracks by creating a client configuration file that contains additional tracing options
and logging levels. These custom options are used when you activate the client tracing log.
For more information, review the Custom Trace Logs section later in this guide.
8. Use the Write Call Context Dataset check box to include Business Process Management (BPM) table values
on the trace log. This information provides the data context for a call each time a call is sent between the
client and the server. This information is useful for developing BPM method directives, as you can intercept
these calls to run additional processing that verifies data and other custom functions.
9. Numerous method calls occur where the data is passed down, modified, not written to the database, and
then returned to the client. Select the Write Response Data option to include these database transactions
on the trace log.
10. Now select the Log Directory Scheme option for the default log directory. The option you select defines
the directory path scheme for this client account.
Available options:
• %appdata%\epicor\log\
• %temp%\epicor\log\
• %localappdata%\epicor\log\
• Default from Epicor.exe.config file -- Select this option to use the path defined in the Epicor.exe.config
file; this config file is located in the Client directory for each Epicor ERP installation. You enter the directory
path and folder you want in the UITraceFileDefaultDirectory setting.
To learn how to set up this feature, review the Auto Capture Client Logs topic later in this guide. This
topic is found under the Log Capture section.
Tip Users can override this default path on each client. When they display the Tracing Options
Form on the client, they can enter a different path in the Current Log File field. The log files then
generate in this folder and the custom directory path becomes the default location for this client.
However if the client can no longer find this location, the default path specified in the
epicor.exe.config file is used instead; this .config file is available in the Client directory. If the
client cannot find this directory path location, the client then writes the client logs to the default
%appdata%\epicor\log location; for example:
C:\Users\<ClientUserName>\AppData\Roaming\epicor\log
Notice after you select a scheme option, the Current Log Directory field displays the default directory path
and folder that gathers the client logs for this user account.
The next time a user launches the Epicor ERP application with this account, the client log automatically generates
using your selected Dataset Options. It generates either in the default file location specified on the user account
or a unique directory entered by the user on the client through the Tracing Options Form.
A new log file is created each time the user logs into the application with this user account. If the user logs into
multiple computers through the same user account, a new log generates for each client instance. When you
have gathered enough information, access the user account and deactivate the client tracing log.
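As a side note, the Log Directory Scheme options described earlier embed standard Windows environment variables. The following Python sketch is illustrative only; it simply mirrors how Windows resolves these variables to show where each scheme lands on a typical client:

```python
import os

# The Log Directory Scheme options embed Windows environment variables.
# The %var% syntax only expands natively on Windows, so this sketch rewrites
# it to the portable ${VAR} form first; unset variables are left untouched.
SCHEMES = [
    r"%appdata%\epicor\log",
    r"%temp%\epicor\log",
    r"%localappdata%\epicor\log",
]

def expand(scheme):
    for var in ("appdata", "temp", "localappdata"):
        scheme = scheme.replace("%%%s%%" % var, "${%s}" % var.upper())
    return os.path.expandvars(scheme)

for scheme in SCHEMES:
    print(scheme, "->", expand(scheme))
```

For example, for a user named ClientUserName, %appdata%\epicor\log typically resolves to C:\Users\ClientUserName\AppData\Roaming\epicor\log.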
You can also manually generate the client (UI Trace) logs by activating the trace log in the Epicor ERP application.
To generate a client trace log:
2. Depending on the interface style, you launch the tracing log window in the following ways:
a. When you run the application using the Classic Menu interface, you activate the trace log from the
Main Menu. Click Options > Tracing Options.
b. When you run the application using the Modern Home Page interface, you can activate the trace log
by clicking the Down Arrow at the bottom of the window. From the toolbar, click the Tracing Options
button. Likewise from the Modern Shell interface, you can activate the tracing log from the Home menu.
Click the Settings tile and the General Options setting. Select Tracing Options.
c. When you run the application using the Active Home Page interface, you can activate the trace log by
clicking the Utilities icon in the top right corner of the window. From the utilities list, select Tracing
Options. Alternately, in the Active Home Page interface, you can activate the tracing log from the Home
menu. Click the Settings icon and the General Options setting. Select Tracing Options.
Regardless of which method you use, the Tracing Options Form window displays.
4. Select the Write Full DataSet check box if you want to record the entire dataset content (if any) that passes
between the client and the server. Each time a method sends data, it now appears in the client log with the
method.
5. Select the Track Changes Only check box if you only want changes to the dataset recorded within the
tracing log. All changes to columns in the dataset are then stored within the log.
6. Use the Write Call Context Dataset check box to include Business Process Management (BPM) table values
on the trace log. This information provides the data context for a call each time a call is sent between the
client and the server. This information is useful for developing BPM method directives, as you can intercept
these calls to run additional processing that verifies data and other custom functions.
7. Numerous method calls occur where the data is passed down, modified, not written to the database, and
then returned to the client. Select the Write Response Data option to include these database transactions
on the trace log.
8. Activate the Include Server Trace check box when you want to track the client's interaction with the server.
This creates a <serverTrace> node within trace packets (<tracePacket>) in the client tracing log. Use the
database activity gathered in this section to review how the client installation may be affecting the
performance of the server. When you select the Include Server Trace check box, server trace options
become enabled:
a. If you select the Trigger Hits check box, when a record is sent to the database to be added, updated,
or deleted (Write/Update/Delete), the framework creates an event in which SQL Server intercepts the call
and performs table-specific logic. After this event is processed, the record is sent to the database. Select
this check box to record these trigger events in the server log.
b. Activate the ERP DB Hits check box to track how the Epicor ERP application interacts with the database.
You can review each database hit as well as how long it took each hit to complete.
c. Select the BPM Logging check box to record Business Process Management (BPM) method calls. Each
time user activity activates a BPM directive, the application server log records the business object method
that was called and how long this call took to complete. This option is production friendly.
d. Select the BAQ Logging check box to record Business Activity Query (BAQ) database calls. Each time
user activity activates a BAQ, the application server log records which query was called and how long it
took this BAQ to gather the data results. This option is production friendly.
e. Use the Other Flags (comma delimited list) field if you want to include additional traces in the log.
You can review the available client and server trace options in the Customize Logs chapter of the Performance
Tuning Guide. Note that when you enter multiple trace options, you should delimit them using commas.
Tip You can add server profiles and traces to the client log using the .sysconfig file. When you select
the Include Server Trace check box, the client log captures these additional options. To add these
profiles and traces to the client log, update the .sysconfig file that launches the client installation. You
can also customize what the tracing log tracks by creating a client configuration file that contains
additional tracing options and logging levels. These custom options are used when you activate the
client tracing log.
For more information, review the Custom Trace Logs section later in this guide.
9. Click OK.
10. Now return to the Home screen and launch some programs whose performance you want to measure. Add
new records, modify records, run processes, and so on.
The trace log records performance data about each action you take within the Epicor application.
11. Return to the Home window; click the Settings button and select Tracing Options.
The Tracing Options window displays.
13. Record the directory from the Current Log File field.
You will use this directory path later when you analyze the client logs in the Performance and Diagnostic
Tool.
The Tracing Options Form has a number of Dataset Options you can select to define what data is tracked by the
client tracing log. These check boxes all display in the Dataset Options section on the form.
For more information, review the Customize Logs section later in this guide.
You can set up the Performance and Diagnostic Tool to indicate when client trace log entries exceed specific
thresholds. When a row in the client log is equal to or greater than a threshold level you define, the row highlights
in red.
You define these threshold levels on the Options window in the Performance and Diagnostic Tool:
3. For the Server Execution Time, enter the starting value for method calls that originate from the server.
Any server calls that take this millisecond value or higher will display in red on the Results grid. The default
value is 500 milliseconds.
4. Next for the Client Execution Time, enter the starting value for method calls that originate from the client.
Any client calls that take this millisecond value or higher will display in red on the Results grid. The default
value is 500 milliseconds.
5. Lastly for the Network Transfer time, enter the starting value for the time it takes a call to travel from
the client to the server or from the server to the client. Any transactions that equal this millisecond value or higher
display in red on the Results grid. The default value is 500 milliseconds.
6. If you select the Auto Save Grid Layouts check box, the Performance and Diagnostic Tool will save the
current sequence of the grid columns.
Now after you select the client logs you want to review and generate the results, the Performance and Diagnostic
Tool highlights rows which exceed these thresholds.
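The highlighting behavior amounts to comparing each row's three timing values against the configured thresholds. This Python sketch is illustrative only; the field names are hypothetical and this is not the tool's actual code:

```python
# Illustrative sketch of the threshold check described above; the field
# names are hypothetical, not the tool's actual code. A row is highlighted
# (displays in red) when ANY of the three measured times is equal to or
# greater than its configured threshold.
DEFAULT_THRESHOLDS_MS = {
    "server_execution": 500,   # Server Execution Time default
    "client_execution": 500,   # Client Execution Time default
    "network_transfer": 500,   # Network Transfer time default
}

def is_highlighted(row, thresholds=DEFAULT_THRESHOLDS_MS):
    return any(row.get(key, 0) >= limit for key, limit in thresholds.items())

row = {"server_execution": 120, "client_execution": 80, "network_transfer": 510}
print(is_highlighted(row))  # True: the network transfer time meets the 500 ms threshold
```

Lowering a threshold on the Options window therefore highlights more rows; raising it highlights fewer.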
You next pull the client (UI Trace) logs into the Performance and Diagnostic Tool.
You defined where the client logs were saved in the Current Log File field on the Tracing Options Form
window. Be sure you know this directory location before you do the following steps:
a. Launch Windows Explorer and navigate to the path that displayed in the Current Log File field. Click
and drag the client log icon from Windows Explorer to the Add Files grid in the Performance and
Diagnostic Tool window.
b. Click the Add Files... button and browse to the location of the client log file. You can also click the
Down Arrow next to the Add Files button to select client log files from multiple locations.
The selected client log file displays on the Add Files grid. Continue to add the client log files you need to
evaluate.
4. Optionally, select the Exclude System Calls check box to prevent system method calls from displaying with
diagnostic results.
The Performance and Diagnostic Tool will then not display values from the GetRowsKeepIdleTime business
object method. This method runs in the background while a current form is active, but it does not indicate
how long it takes other methods like Update, GetNew, and GetList to run. These method calls are generated
by user activity.
However if you are testing how the client interacts with the server when no user activity is taking place, you
will want to review the calls from the GetRowsKeepIdleTime method. In this case, do not select this option
so you can display these calls in the client results.
5. Optionally, select the Exclude System Monitor Calls check box to prevent system monitor calls from
displaying with diagnostic results. This check box filters any methods that called the System Monitor,
specifically, the GetRowsKeepIdleTime method. Like system calls, system monitor calls do not relate to
the calls generated by application data, but you may need to review these calls.
6. Notice you can also change the Log File Date Format to select a date format used in a different locale
or enter your own Custom Format.
You can now analyze the trace log data within the Results and Summary sheets.
Analyze Results
Use the features on the Performance and Diagnostic Tool to review the client results recorded on the trace log.
The Results grid displays the main details gathered from each trace packet.
To analyze client trace log results:
2. You can filter the log results based on values that populate each column. To do this, click on the Funnel
icon that displays on each column header.
A filter window displays. The filter options that display depend on the field type and the column data.
3. By default, you can filter on (All) the records in the column. This indicates that every record in this column
displays.
4. To display records where this column does not have a value, select the (Blanks) option.
5. You can also filter this column on a specific value. To do this, first clear the (All) check box and then select
the records you want to include in the filter. For example, you can clear (de-select) all the check boxes except
the GetByID check box. Then only entries that have the GetByID value display in the results grid.
6. Each filter window also has a <FieldType> Filters sub-menu. This sub-menu contains a series of filter
options you can use to limit the results based on the field type.
For example, if you are filtering on a text field, you have Text Filters options like Equals..., Does Not Equal...,
Begins With..., Ends With..., and so on.
8. Use this window to create custom filter conditions. You filter against the selected column, such as BO.Method.
You then select an expression from the middle drop-down list (such as Contains), and then enter the custom
value in the right drop-down list.
11. When you finish creating the custom filters, click OK.
12. You can maximize the grids by pressing the <F11> key or selecting the Options > FullScreen menu option.
The interface now fills your screen, displaying more columns and information on the Results, Summary
Analysis, Errors and Messages, and the GetRowsKeepIdleTime Chart grids. These grids keep any
options you selected on the default view. To return the Performance and Diagnostic Tool to the default
view, click the Leave full screen mode option on the title bar.
13. To help organize the results, use the Group By feature on the grid. Click and drag a column header (for
example, Object Name) to the Drag a column header here to group by that column area. You can drag
multiple column headers to further structure the results as you need.
Example If you group by using the Business Object column, the grid displays all the business objects
recorded on the client trace in alphabetical order.
14. You can review the following items on each business method call:
a. Start Date - Displays the date and time this business method call began.
b. End Date - Displays the date and time this business method call completed.
c. Type - Indicates the business object that generated the business method call. Typical values in this column
are Erp.BO (application calls), Ice.BO (framework calls), and Ice.Lib (framework resource calls).
d. BO.Method - Displays the name of the method that generated the call. Some example methods include
GetList, ChangeNeedByDate, KitPartStatus, and so on.
e. Execution Time - Indicates how long in milliseconds it took the client to complete the current business
method call.
f. Object Name - Displays the name of the business object that generated the call. Some example objects
include GLJournalEntry, APInvoice, Job Entry, and so on.
g. ServerExecutionTime (ms) - Indicates how long in milliseconds it took the server to complete the
current business method call.
h. NetworkTransportTime (ms) - Indicates how long in milliseconds it took the network to send the
business method call from the client machine to the server machine.
i. Appserver Thread - Indicates which thread from the application server was used to run the business
method call.
j. FileNumber - Displays the number of the client log file from which the current row generated. The Add
Files grid displays the matching FileNumber column, so you can use this value to open this client log .txt
file.
Tip If any row displays in red, it indicates that the Server Execution Time, the Client Execution Time,
and/or the Network Transfer values are equal to or greater than a threshold value. The Performance
and Diagnostic Tool measures these values in milliseconds. You can modify these threshold values on
the Options window; these fields are available on the Client Trace sheet.
15. To copy the data or display this data in different ways on the grid, right-click the Results grid to display the
context menu.
a. Select the Copy All option to place the rows in the Windows clipboard. You can then paste this data
into a text editor, Microsoft® Word®, and other applications.
b. Use the Copy All Include Labels option to copy both the rows and the column labels. When you paste
this data in another application, the column labels display above the rows.
c. If you just want to copy a few rows, select specific rows by using the <Ctrl> button or select a range of
rows using the <Shift> button. Then right-click the grid and select the Copy Selection option.
d. To copy both the selected rows and the column labels, select the Copy Selection Include Labels option.
e. Select Show Summaries... to display the Epsilon button on each column. Click this button to launch
the Show Summaries window. You use this window to summarize data on the column through the
Average, Maximum, Count and other options.
g. Click Show more details to display all the available information for each call. The Packet Details sheet
displays a title of the trace packet business object and method, as well as the time of the tag.
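Because the client trace log is XML built from <tracePacket> elements (with a <serverTrace> child when server tracing is enabled), you can also inspect packets outside the tool with any XML parser. In this minimal sketch, the <tracePacket> and <serverTrace> names come from this guide, while the other element names are assumptions that may differ from a real log:

```python
import xml.etree.ElementTree as ET

# Minimal sketch: read <tracePacket> entries from a client trace log. The
# <tracePacket> and <serverTrace> element names come from this guide; the
# <businessObject>, <method>, and <executionTime> names are assumptions
# and may differ from an actual log file.
SAMPLE = """<traceData>
  <tracePacket>
    <businessObject>Erp.BO.SalesOrder</businessObject>
    <method>GetByID</method>
    <executionTime>412</executionTime>
    <serverTrace/>
  </tracePacket>
</traceData>"""

root = ET.fromstring(SAMPLE)
for packet in root.iter("tracePacket"):
    method = packet.findtext("method")
    ms = int(packet.findtext("executionTime"))
    has_server = packet.find("serverTrace") is not None
    print(method, ms, "server trace:", has_server)
```

A script like this is only a fallback; the Performance and Diagnostic Tool remains the supported way to analyze these logs.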
Review Summary
The Summary grid displays the performance times required for each business object to run.
To analyze the client trace log summary:
2. The Trace Date/Time and File Name fields display the selected trace log as well as the date and time it
was run.
3. In the Report Monitor section, notice the GetRowsKeepIdleTime method. This method runs in
the background while a form is active.
Tip If you shut off the System Monitor, the tracing log does not record time against the
GetRowsKeepIdleTime method.
4. The All grid displays the summaries of the method calls recorded on the tracing log.
5. The Calls column indicates how many times the method sent a call to the server.
6. The execution times, in milliseconds, for each call are calculated and display in the accompanying columns:
• Average
• Longest
• Least
• Total
Additional information also displays that identifies the object:
• Diagnostics ID
• Type
• Object Name
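These summary statistics are straightforward aggregates over the recorded calls. As an illustrative sketch (the input shape below is hypothetical, not the tool's log layout), they can be computed like this:

```python
# Illustrative sketch: compute Summary-style statistics (Calls, Average,
# Longest, Least, Total) per business object method from a list of
# (method, execution_time_ms) samples. The input shape is hypothetical,
# not the actual trace log layout.
from collections import defaultdict

def summarize(calls):
    grouped = defaultdict(list)
    for method, ms in calls:
        grouped[method].append(ms)
    return {
        method: {
            "Calls": len(times),
            "Average": sum(times) / len(times),
            "Longest": max(times),
            "Least": min(times),
            "Total": sum(times),
        }
        for method, times in grouped.items()
    }

calls = [("GetList", 40), ("GetList", 60), ("Update", 200)]
print(summarize(calls)["GetList"])
```

Comparing the Average and Longest values per method is a quick way to spot calls with high variability.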
Export Results
You can further analyze the client diagnostic results by exporting this information into Microsoft® Excel® or
Microsoft® SQL Server Management Studio®.
Leverage the tools in these applications to more deeply analyze the performance data. When you export the
results to Microsoft Excel, you can use this application to display this data using tables and graphs. When you
export the results to SQL Server, you place these results into a series of SQL tables and views. You can then use
SQL Server Management Studio to run queries against these results.
1. To save the results in a Microsoft Excel file, click the Export to Excel button.
The Save As window displays.
3. Enter a File name that helps you easily identify the purpose of the .xlsx file.
4. Click Save.
You can now launch the .xlsx spreadsheet and review it in Microsoft Excel.
5. You can also export the client log results into SQL Server Management Studio. Click the Down Arrow on
the Export to Excel button; select the Export to SQL option.
6. The Optional Export Description window displays. By default, the description uses the title of the first
file; if there are multiple files, the first file's title is automatically selected. You can either leave this value,
clear it, or enter a different name.
7. Click OK.
9. In the Connection tab, select the SQL Server to which you want to link to the results data.
10. Now enter the User name and Password you use to log into this server.
11. From the drop-down list, select the database that will receive the client diagnostic results.
12. Verify you can export the results to this database by clicking the Test Connection button.
13. A dialog box displays indicating the connection is successful. Click OK.
You return to the Data Link Properties window.
16. Now open SQL Server Management Studio and review the schema created by the Performance and
Diagnostic Tool. You can view the new tables:
• ExportRun - The main table. It displays the user-selected Optional Export Description as well as a
few other related fields; the most important of these, Export ID, differentiates between export
executions in each of the child tables.
• ExportZipFiles - A table linked to the ExportRun table. It contains a zipped version of the exported
file(s) in binary format, as well as the path from which the files were exported.
• Client_<TableName> - Multiple tables, one for each item displayed in the Performance and Diagnostic
Tool. You can run queries against these tables to extract the data you need.
Within SQL Server Management Studio, you now display the diagnostic results. You can review these results in
a grid. The results also display in tables under the Tables node, and a series of views generate as well. The
diagnostic results include all details of the trace packet tags, such as the type of test and the data file names,
exported in a single operation. All details are stored in memory and released once the data is saved into SQL.
If you are working with an Epicor consultant or Epicor Technical Support to improve performance, follow these
steps to send client logs and server logs to a support representative.
1. During a period of peak activity on your network, launch the Epicor ERP application on the client workstation.
2. Depending on the interface style, you launch the tracing log window in the following ways:
a. When you run the application using the Classic interface, you activate the trace log from the Main Menu.
Click Options > Tracing Options.
b. When you run the application using the Modern Home Page interface, you can activate the trace log
by clicking the Down Arrow at the bottom of the window. From the toolbar, click the Tracing Options
button. Likewise from the Modern Shell interface, you can activate the tracing log from the Home menu.
Click the Settings tile and the General Options setting. Select Tracing Options.
c. When you run the application using the Active Home Page interface, you can activate the trace log by
clicking the Utilities icon in the top right corner of the window. From the utilities list, select Tracing
Options. Alternately, in the Active Home Page interface, you can activate the tracing log from the Home
menu. Click the Settings icon and the General Options setting. Select Tracing Options.
Regardless of which method you use, the Tracing Options Form window displays.
4. Now select the Track Changes Only check box. Only update, new, delete, and other method calls that
change the database are included on the trace log.
5. Select the Include Server Trace check box. This option causes calls from the client to the server to be
included in the trace log.
7. Now launch the program that has poor performance, following the interface movement pattern and data
entry pattern that demonstrates the issue.
Tip Typically you should have the end user who reported the issue follow the pattern that causes the
poor performance. The data the user enters is also very important. The performance issue can be
triggered by many factors -- the data the end user enters, the options the end user selects, the
navigation path the end user follows, and so on.
11. Now perform a second test during a period of low activity on your network (after hours or on a weekend).
Launch the Epicor ERP application from the server machine.
The reason you launch the Epicor ERP application during this time and in this environment is to eliminate
load condition and network traffic between the client and the application server. Even if you know the
network is not the cause of the poor performance, you must record these performance times so the Epicor
consultant or Epicor Technical Support can compare the performance under these different conditions.
13. Email the files to your Epicor consultant or Epicor Technical Support.
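Once collected, a trace log can also be examined outside the PDT. Below is a minimal sketch of summarizing method execution times, assuming a hypothetical <tracePacket>/<methodName>/<executionTime> layout -- the real tag names and structure vary by Epicor release, so check an actual log before adapting this.

```python
import xml.etree.ElementTree as ET

# Hypothetical trace log fragment for illustration only.
SAMPLE = """<traceData>
  <tracePacket><methodName>Update</methodName><executionTime>1450</executionTime></tracePacket>
  <tracePacket><methodName>GetByID</methodName><executionTime>120</executionTime></tracePacket>
</traceData>"""

root = ET.fromstring(SAMPLE)
calls = [(p.findtext("methodName"), int(p.findtext("executionTime")))
         for p in root.iter("tracePacket")]

# Rank calls by execution time so the slowest method surfaces first.
calls.sort(key=lambda c: c[1], reverse=True)
print(calls[0])
```

A quick ranking like this helps you confirm which method call to discuss with your Epicor consultant before emailing the full logs.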
Fields
This topic documents the fields and sheets available for analyzing the client log files.
Some fields on the interface have a context menu, which is indicated by a triangle in the upper right corner of
the field. To open the context menu, right-click on the field.
Add Files
Click the Add Files button to select one or more server log files for analysis. Depending on what you are evaluating,
multiple server.log files may have been generated. You can then analyze these files together within the Performance
and Diagnostic Tool.
You can also click the Down Arrow next to the Add Files button to select server log files from multiple server
locations. You may need to do this on load balanced systems where the application is located across multiple
servers.
Clear Filters
Click this button to restore the default layout for the grid columns and remove the filters you've created. This resets
the Results, Summary Analysis, and Errors and Messages grids to their original base layout. You can now
click the Generate Diagnostics button again and display unfiltered results throughout these grids.
Clear Results
Click the Clear Results button to remove data from the Results and Summary sheets.
You can also continue generating data from different trace logs without clearing the data. The Summary sheet
then contains results from each trace log generation.
Clear Selected
Click the Clear Selected button to remove the log file paths you have currently loaded into the Performance and
Diagnostic Tool.
Custom Format
If the date format you need is not available on the Log File Date Format drop-down list, enter the Language
ID for the language you use in this field. The date format linked to this language loads into the Performance and
Diagnostic Tool.
Export to Excel
Click the Export to Excel button to export results to Microsoft® Excel® for further manipulation and analysis. The
exported data is based on which grid is currently visible. For example, if the Results grid is visible, the data on
this grid is exported.
Generate Diagnostics
Click the Generate Diagnostics button to generate the log file results and display these results in the Summary
and Results grids.
Scenarios
This topic contains examples of how you can measure client performance through different methods.
This may provide you with some things to consider with your network, especially when comparing LAN users
against WAN users. However, CPU differences will probably not be detected, because the tool captures the
execution time of the method calls, not the time taken to open forms or render the data within these forms.
Configuration Check
The Performance and Diagnostic Tool contains a utility to check the configuration of the application server. Use
this Config Check option to see what issues and potential issues you may have with the application server
configuration.
This feature checks a number of configuration items, including the CPU Speed and Configuration Check
standard metrics. After the Performance and Diagnostic Tool analyzes the configuration, this feature displays
recommended actions you can follow to fix various issues.
You first need to set up the tool so it connects to the application server for the Epicor instance you wish to test.
4. Use the Connection Method drop-down list to indicate how this application server checks for authentication
certificates through Internet Information Services (IIS). When a user logs into the application, the selected
method checks whether the user can access the Epicor application. Available options:
Tip You can also find the Connection Method in the .sysconfig file. Locate the <EndpointBinding> value to see the method used by the application server.
• UsernameSslChannel -- Use this option to authenticate transactions using an Epicor Username and Password over a secure transport; the message which contains the data transfer is encrypted. Because this binding does not use Hypertext Transfer Protocol Secure (HTTPS), it tends to be slower than bindings which use HTTPS. Use this method for application servers that handle smart client installations when users reside in different domains. By using an SSL certificate, users from these different domains can log into the Epicor ERP application.
• HttpsBinaryUsernameChannel -- Use this option to authenticate transactions using an Epicor Username
and Password. The data transfers between the client and server using Hypertext Transfer Protocol Secure
(HTTPS). HTTPS encrypts the data transfer.
• HttpsBinaryWindowsChannel -- Use this option to authenticate transactions using a Windows Username
and Password. The data transfers between the client and server using Hypertext Transfer Protocol Secure
(HTTPS).
You can select this method for application servers that handle smart client installations and Epicor Web
Access (EWA) installations where users access the application through the same domain. Any user with
a Windows Username and Password within this domain can successfully log into the Epicor application.
You set up this protocol using a Domain User Account. This account can be either a custom account
contained within an application pool (AppPool) or a built-in account that runs through the LocalSystem,
LocalService, or NetworkService. Built-in accounts automatically contain security verification to work
within their selected local or network service. Custom accounts typically have more powerful security,
but they can require more manual set up as well.
• HttpsOffloadBinaryUserNameChannel -- This HTTPS protocol binding offloads encryption handling to an intermediary Application Request Router, such as an F5. The binding authenticates using an Epicor Username and Password token. The data transfers between the client and server using Hypertext Transfer Protocol Secure (HTTPS).
5. Enter the User Id and Password for the Epicor user account used to access the application server.
6. Next enter the Client Directory for the folder that contains the client installation for the Epicor application.
You can enter this path directly or click the Browse (...) button to find and select it. Enter the path you
used to locate the .sysconfig file.
7. If you select the UsernameSSLChannel option for the Connection Method, the Dns Identity field activates.
Enter the expected DNS server name in this field. You can find the correct DNS server name from the
.sysconfig file that points to the application server; this value is located in <appSettings><DnsIdenti
ty> setting. In your Epicor ERP installation, .sysconfig files are located in the client\config folder.
8. Enter the Operation Timeout you wish to use; this value defines how long the Performance and Diagnostic
Tool waits until an incomplete operation is aborted by the application server. The default value is 300 seconds.
9. Use the Read configuration from sysconfig file option to select a configuration settings file from your
Epicor instance. Click the Browse (...) button to find and select the sysconfig file you will use to run the
PDT configuration check and network diagnostics. The settings from the selected sysconfig file populate
the Application Setup sheet. If you have a saved password in your file, re-enter the password in plain text in the Password field, as the PDT won't use the password from the sysconfig file.
The Configuration Check and Network Diagnostics now use the options you defined; these settings are used for
both plugins.
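The tip earlier in this topic notes that the Connection Method lives in the client .sysconfig file. A minimal sketch of the relevant fragment follows, assuming the <appSettings> layout referenced in this section; the exact element names, attribute style, and the server name shown are assumptions that may differ in your installation:

```xml
<configuration>
  <appSettings>
    <!-- Binding the client uses to reach the application server -->
    <EndpointBinding value="HttpsBinaryUserNameChannel" />
    <!-- Expected DNS identity; relevant when the binding is UsernameSslChannel -->
    <DnsIdentity value="myserver.mydomain.local" />
  </appSettings>
</configuration>
```

In your Epicor ERP installation, .sysconfig files are located in the client\config folder.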
The Config Check sheet contains tools for analyzing your application server configuration settings.
2. Click the Check Configuration button. Notice the directory path to the server location displays next to this
button.
The Performance and Diagnostic Tool evaluates the configuration settings and the system. These processes
may take some time to run. When complete, the Config Check Summary sheet displays a list of the rules against which the application server was checked.
3. You can maximize the grids by clicking the Full screen button.
The interface now fills your screen, displaying more columns and information on the Config Check Summary
and Config Check Details grids. These grids keep any options you selected on the default view. To return
the Performance and Diagnostic Tool to the default view, click the Leave full screen mode option on the
title bar.
4. The Result column displays the generated evaluation of each rule against your application server
configuration. Available results include FAIL, WARNING, EXISTS, INFO, Not Available, and PASS.
5. Depending on the results, different instructional text displays in the Action Required column.
• INFO - Displays some key information you should review to make sure your system is set up correctly.
• EXISTS – Notifies you that various items, like customizations and BPM directives, are active in the current
system. These items should be evaluated for performance.
• PASS – The configuration met or surpassed these rule requirements. No further action is needed.
• WARNING – Alerts you that potential performance issues may occur. Review these items to see if further
changes are needed.
• FAIL – The application server configuration did not meet the rule requirements. The Action Required column displays a recommended action you can take.
• Not Available - Displays any item that was not available to test. You may need to correct some setup
configuration items and re-run the configuration check.
6. The Config Check Details sheet displays the various rule keys run to evaluate each configuration rule.
Expand one of the rules to see the specific calls.
7. Notice you can view the results in a Microsoft Excel file. To do this, click the Export to Excel button. The information from all the Configuration Check sheets displays in the exported spreadsheet.
You run the Live Memory Inspection feature to both review how a .NET process runs during a specific date and
time and analyze memory crashes caused by a .NET process. This feature inspects .NET processes run by the
Epicor.exe, Epicor64.exe (client), and w3wp.exe (app server) executable programs.
Use these tests to determine why the Epicor ERP application sometimes slows down temporarily. These tests are also useful
when you are experiencing unusually high CPU activity and high memory utilization. When application servers
require over 10 gigabytes (GB) to run, you should evaluate the reasons why your system needs so much memory.
To begin, you first select the process you want to review. You then decide if you will generate a stack trace, a
memory trace, or both traces:
• Stack Trace – Displays the method flow the .NET process ran, detailing a list that starts from the last method
to the first method called in each thread.
• Memory Trace – Records the objects stored in memory while the .NET process ran, displaying both a count
of how many objects of the same type ran as well as the size (in kilobytes) of each object.
If the stack trace and/or the memory trace does not return enough performance information or if a crash dump
you generated from the Microsoft operating system did not return enough results, you can also use this feature
to generate memory dump files. A memory dump file takes a snapshot of the memory required to run the
selected process. You can then review this file through the Performance and Diagnostic Tool or send this file to
Epicor Technical Support for further analysis.
Tip To learn how to generate a crash dump from the Microsoft operating system, review the Crash
Diagnostics section in this guide.
You typically run this memory inspection process in a test environment that mirrors your production environment.
Running this test temporarily halts the process you are inspecting, so this can disrupt the Epicor ERP application
in your production environment. Although you can run a basic stack trace without much disruption, most memory
tests temporarily freeze the application. Running these tests against a live environment is an extreme measure,
and you only should do this when you cannot determine the cause of performance issues in your test environment.
If Epicor Technical Support requires more information, they may also ask you to run memory tests against your live environment.
You use both the Memory/Stack Trace and Memory Dump features to evaluate memory issues. You typically run
these memory tests through a two-step process.
1. Memory/Stack Trace -- Record both a memory trace and a stack trace. This information will help Epicor
Technical Support identify which part of memory is growing and whether the stack trace is active. You
should generate this memory trace and stack trace three to four times; Epicor Technical Support will then have multiple files to review.
2. Memory Dump -- Generate this large file to record how your system is using memory. When you create a
memory dump file, you capture a snapshot of the memory and stack traces used by the selected process.
You now have both a series of memory/stack trace files and a memory dump file. When you place your call with Epicor Technical Support, send these files to them for further analysis. These files will help Support determine the cause of the memory issue more effectively.
Setup
Before you can run a live memory inspection, you must define some options within the Performance and Diagnostic
Tool.
3. The memory trace and stack trace use a color to categorize the calls made by the Epicor Framework (ICE)
and Epicor ERP application (ERP). You can then more easily see which system sent the call. If you like, you
can change these colors by selecting a different option on these drop-down lists:
• Erp Highlight Color – The default color is Dark Orange; use this drop-down list to select a different
color.
• Ice Highlight Color – The default color is Corn Flower Blue; use this drop-down list to select a different color.
4. Indicate the Number of return rows (Memory Dump Compare) you want to display when comparing
memory dump results on the Post Mortem Memory Dump Compare sheet. By default, a maximum of
100 rows display in these additional results. Use this field to increase or decrease the number of rows that
return through this context menu option.
5. In the Pdb Path field, enter the path to a program database (PDB) file that holds debugging and project state information that allows incremental linking of a Debug configuration of your program.
6. Select the Using DebugDialog to capture a dump on First Chance Exception link to display a documentation webpage from the Microsoft Developer Network (MSDN). This topic describes how to use Microsoft's DebugDialog Tool to capture a memory dump for a First Chance Exception (such as a crash dump).
Tip You can also modify the Microsoft Registry Editor to generate a memory crash dump. To learn
how to set this up, review the Crash Diagnostics section in this guide.
7. When you finish setting up the Live Memory Inspection parameters, click OK.
Do the following to run a live memory test. Use these controls to run a stack trace, a memory trace, or both trace
options.
2. Select a Process ID for a process currently running on your server. This drop-down list displays all the current
w3wp processes with their application pool names as well as the names of any Epicor .exe processes. Typically
you select the application pool that runs the slow application server to gather the performance results you
need to review.
Tip You can also locate the Process ID, or PID, you need to inspect by analyzing the server trace log
or client trace log. Identify the process you wish to inspect and locate its PID value.
3. Now define what information you wish to see. Select the Memory Trace option to see details of the objects
stored in the memory of the process. This test generates a list of objects stored in memory while the .NET
process ran, displaying both a count of how many objects of the same type were activated as well as the
size (in kilobytes) of each object.
Only use this memory trace option when you need it, as the memory trace is more time-consuming to run and causes the Epicor ERP application to freeze until the test completes. Typically you should not run a memory trace in your live environment.
4. Select the Stack Trace check box to review the method flow of the .NET process, detailing a list that starts from the last method to the first method called in each thread. As the default option, a stack trace runs quickly (only 500-1000 milliseconds), so you can run this test in a live environment.
5. To generate the memory trace and/or stack trace, click the Analyze button.
6. You are warned this action will freeze the Epicor ERP application. If you run a memory trace, the application
freezes for about 30-50 seconds. Click Yes to run the memory inspection or No to exit the test.
If you ran a stack trace, review the results on the Stack Trace tab:
1. The ManagedThreadID column contains information about each thread that ran during the selected
process. Click the column header to sort the threads in ascending or descending order.
2. To group the threads together, click and drag the ManagedThreadID column to the Group By area.
3. If you would like to see more information about the methods that ran during a specific thread, right-click a ManagedThreadID; from the context menu, select the Show details of this object reference option.
The methods that ran during this thread display. They display in order from the last method to the first
method called during the thread. For example, use this option when a method caused the Epicor ERP
application to crash. By expanding this thread, you discover an Update method caused the application
shutdown.
4. Click the Epsilon button to display the Summary window. You can summarize the results by Count, Minimum, and Maximum options. When you finish selecting the Summarize options, click OK.
5. Click the Filter button to hide rows you do not wish to display:
• (All) – The default option. This displays all the calls you captured through the stack trace.
• (Custom) – Displays the Custom Filter window. Use the controls on this window to set up a filter
statement.
• (Blanks) – Causes the stack trace results to only display rows that did not have a ManagedThreadID.
• (NonBlanks) – Causes the stack trace results to only display rows that have a ManagedThreadID value.
6. If you activated either the Just Epicor Code or the All code check boxes, additional plus (+) sign icons
display in the stack trace records. Expand these icons to review the values of objects handled by the stack
frame.
If the object is simple, the trace displays its Type and Value; if the object is complex, the trace displays its
Property Name, Type, and Value. To display the properties of a complex object, right-click it and select
the Show details of this object reference option. You can also right-click each property to display more
information.
Tip This information might not be as useful for an IT Manager, but Epicor Technical Support will want
to review these details.
7. To locate a specific thread or method call, enter a value in the Search field and press <Enter>.
The entries that match your search term display on the Stack Trace tab.
8. When you finish reviewing the memory inspection data, click the Clear button.
The Memory Trace tab displays how much memory the selected process consumed during the inspection:
1. Click on a column header to organize the results in either ascending or descending order.
2. To group the memory calls together, click and drag a selected column to the Group By area.
3. One of the key columns to review is the Time to get data values. This shows you how long it took the
process to pull data from the database for display on the user interface.
a. Working Set - This value displays the total amount of RAM available at the time the process ran.
b. Private Working Set - This value displays how much RAM was required to run the process. By comparing
the Working Set and the Private Working Set values, you can see when the process required so much
memory, it slowed down the application.
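The count-of-objects-per-type view the Memory Trace produces is the same idea as grouping a garbage collector's live objects by type. A minimal Python analogy follows (counts only, not the per-object kilobyte sizes the PDT reports):

```python
import gc
from collections import Counter

def live_object_counts(top=5):
    """Group the objects the garbage collector is tracking by type name and
    return the largest groups -- analogous to the Memory Trace's
    count-of-objects-per-type view."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(top)

for type_name, count in live_object_counts():
    print(f"{type_name}: {count}")
```

A type whose count keeps climbing between inspections is usually the first thing to investigate.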
The MiscInfo tab contains some additional information about the live memory test.
a. CPU Utilization - Indicates how much of the CPU's capacity was required to run the process.
b. Threads - Displays how many threads were available to run the process.
c. Number of CPUs - Indicates how many CPUs are running to support the process.
d. Server - Shows you how long the server ran to complete the process.
e. Date/Time - Displays when you generated a memory dump file to track the selected process.
2. Click on a column header to organize the results in either ascending or descending order.
3. To group the memory calls together, click and drag a selected column to the Group By area.
4. When you finish reviewing the memory inspection data, click the Clear button.
Use the Roots tab to identify the root cause of suspected memory leaks.
1. Click on a column header to organize the results in either ascending or descending order.
2. To group the memory calls together, click and drag a selected column to the Group By area.
3. When you finish reviewing the memory inspection data, click the Clear button.
Use the Objects tab to view the impact of the object types on memory use, and to find code in your app that
uses memory inefficiently. If you are repeatedly seeing w3wp.exe take more memory than these threshold levels
over a period of several days, you may be experiencing a memory leak.
1. Click on a column header to organize the results in either ascending or descending order.
2. To group the memory calls together, click and drag a selected column to the Group By area.
3. Click the DataTableView button to create a dynamic view of a single set of data, much like a database
view, to which you can apply different sorting and filtering criteria.
The Process tab lists all processes on the current server, including their memory utilization as well as their CPU usage over time.
a. Private Memory Bytes - Displays the current size, in bytes, of memory that this process has allocated
which cannot be shared with other processes.
b. Working Set Bytes - Displays the size of the working set, in bytes, used for this process only and not
shared by other processes.
c. CPU Time - Indicates how much of the CPU's capacity was required to run the process. It is the amount
of time for which a central processing unit (CPU) was used for processing instructions of the process.
3. Click on a column header to organize the results in either ascending or descending order.
4. To group the memory calls together, click and drag a selected column to the Group By area.
Export to Excel
You can also display the memory trace and stack trace information in Microsoft® Excel®. This program has more
options for reviewing the memory inspection data.
2. Navigate to a directory file location where you want to save the exported file.
4. Click the Save as Type drop-down list to select the version of Microsoft Excel you wish to use.
5. Click Save.
You can now open this exported file in Microsoft Excel. Besides using this file for your own analysis, you can send
this file to Epicor Technical Support.
Important As a best practice, you should frequently store your test results in Excel. If Epicor Technical
Support requests memory test information, you will already have it available for them to review.
Memory Dump
The Memory Dump feature is a separate task from the live memory test. When you create a memory dump file,
you record a snapshot of the memory and stack traces used by the selected process.
You can then review this memory dump file or pass it along to Epicor Technical Support for further analysis.
Specifically, Epicor Technical Support can help you determine the causes of memory leaks.
Be sure you have enough disk space to create these memory dump files. Each memory dump file mirrors the size of the memory footprint (sometimes as much as 30 gigabytes (GB)), so these files require a lot of disk drive space.
The memory dump file you generate through this feature is different from the crash dump file you can generate
from the Windows operating system. You can set up Windows to automatically generate a crash dump file when
the operating system stops a process. The crash dump file is useful for determining what process caused the
exception that shut down the application. The memory dump file instead records what memory was required while a specific process ran. It reduces how much you need to review in the WinDebug tool (included with your Windows operating system). Memory dump files contain less information and so are easier to review.
You should generate three dump files; generate one before the process starts, another file during the middle of
a process run, and then a third file at the end of the process run. You can then compare and contrast the memory usage at different points in the process cycle.
To generate a memory dump:
1. From the Process ID drop-down list, select the process you wish to review.
2. Click the Take Memory dump button. This generates a memory dump file before the process runs.
4. You are warned this action will freeze the Epicor ERP application. If you run a memory trace, the application
freezes for about 30-50 seconds. If you generate a stack trace without the Just Epicor Code or All Code
options, the application freezes for a shorter time. However, when you run these tests in a live environment,
users will most likely notice their processes slow down. Click Yes to run the memory inspection or No to
exit the test.
5. Next while the process runs, click the Take Memory dump button.
6. Wait until the process ends. Now click the Take Memory dump button again.
If you will analyze a memory dump or a crash dump from a different computer, you need the mscordacwks (DAC)
file for the machine that generated the .dmp file. This item is the Data Access Component file, which allows the PDT to interpret the memory data structures that maintain the state of a .NET application.
To access this file:
1. Log into the server machine that generated the .dmp file.
4. Select the Get DAC for the current computer (x86 and x64) option.
6. Click Save.
This downloads a .zip file that contains the x86 and x64 versions of the DAC.
10. Now return to the Performance and Diagnostic Tool; navigate to the Process inspection tab.
11. For the Dump location, click the Browse (…) button to find and select the directory path that contains the external memory dump or crash dump file.
12. Notice the External DAC field. Click the Browse (…) button next to this field and navigate to the folder
where you extracted the External DAC file.
14. You are warned this action will freeze the Epicor ERP application. If you run a memory trace, the application
freezes for about 30-50 seconds. Click Yes to run the memory inspection or No to exit the test.
You can use this DAC file to compare results between the memory dump you generated for the current system
against the memory dump generated on the external machine. If you want Epicor Technical Support to review
these files, you will also need to send this external DAC file with the memory dump files.
You next use the Post-mortem Memory Dump compare sheet to evaluate the results from two .dmp files:
1. Within the Performance and Diagnostic Tool, navigate to the Post-mortem Memory Dump compare
> Diff sheet.
2. Select the First Snapshot memory dump file you wish to use for the comparison. Click the Browse (…)
button next to this field to find and select this .dmp file.
3. Now select the Second Snapshot memory dump file you wish to use for this comparison. Click the Browse
(…) button next to this field to find and select this .dmp file.
4. If one of the snapshot .dmp files was generated on an external machine, add the External DAC file before you run the analysis. Click the Browse (…) button to find and select the directory where you expanded this .zip file; for information on downloading and unzipping this file, review the previous Get External DAC topic.
6. To see details on each type, right-click the grid; from the context menu, select the show objects of this type option.
If the object is simple, the trace displays its Type and Value; if the object is complex, the trace displays its
Property Name, Type, and Value.
7. To see the methods called during the memory dump, click the Roots tab.
8. Notice you can sort the methods in either Ascending or Descending order.
9. Click the Epsilon button to display the Summary window. You can summarize the method results by Count, Minimum, and Maximum options. When you finish selecting the Summarize options, click OK.
10. Click the Filter button to hide rows you do not wish to display:
• (All) – The default option. This displays all the calls you captured through the stack trace.
• (Custom) – Displays the Custom Filter window. Use the controls on this window to set up a filter
statement.
• (Blanks) – Causes the stack trace results to only display rows that did not have a ManagedThreadID.
• (NonBlanks) – Causes the stack trace results to only display rows that have a ManagedThreadID value.
11. Repeat these steps to load in other memory dump files for analysis.
Use the results to compare which methods consumed the most memory during the selected times recorded
within the two memory dump files.
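The (Blanks) and (NonBlanks) filters described above can be sketched in a few lines. The row layout below is an illustrative assumption, not the actual grid schema produced by the .dmp analysis:

```python
# Hypothetical rows from a stack trace grid; the column names are assumptions.
rows = [
    {"ManagedThreadID": 12, "Method": "Erp.BO.PartSvc.GetList"},
    {"ManagedThreadID": None, "Method": "Ice.Lib.SessionMod"},
]

# (Blanks): rows with no ManagedThreadID value
blanks = [r for r in rows if r["ManagedThreadID"] is None]

# (NonBlanks): rows that do have a ManagedThreadID value
non_blanks = [r for r in rows if r["ManagedThreadID"] is not None]
```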
Export to Excel
You can also display memory dump information in Microsoft® Excel®. This program has more options for
reviewing the memory inspection data.
2. Navigate to a directory file location where you want to save the exported file.
4. Click the Save as Type drop-down list to select the version of Microsoft Excel you wish to use.
5. Click Save.
You can now open this exported file in Microsoft Excel. Besides using this file for your own analysis, you can send
this file to Epicor Technical Support for further evaluation.
Important As a best practice, you should regularly export your memory dump data to Excel. If Epicor
Technical Support requests memory dump information, you will already have it available for them to review.
Network Diagnostics
You can use the Performance and Diagnostic Tool to verify the baseline network and server performance are
running at optimal levels. Use this feature to evaluate the Network Test standard metric.
To do this, run multiple tests to gauge the overall performance of your network, and compare these results
against the network standard metric. Just like the Configuration Check, you need to update the fields on the
Settings > Options window so the tool connects to the application server.
You can also run this same test on client installations to compare the client results against the server results. If
there is latency on the client network, you will find variations in these test results.
Run this test to verify that the baseline network and server performance are running at optimal levels.
3. Click on each bar graph to view specific details of each network test.
The performance results for the selected test display in the fields above the graph. These results use the
Minutes: Seconds: Milliseconds format.
• Least - Displays the performance time for the shortest call. For example, .16 (16 milliseconds).
• Longest - Displays the performance time for the longest call. For example, 2.50 (2 seconds, 50
milliseconds).
• Average - Indicates the average performance time it took to finish each call. For example, .41 (41
milliseconds).
• Average Network Time - Indicates the average performance time it took each call to travel across the
network. For example, .14 (14 milliseconds).
4. You can also review the data performance for each call:
• Average MB Transferred - Displays how much data, in megabytes, was transferred on average during
each call. For example, 2.42 megabytes.
• Average MB/second Transferred - Displays how much data transferred on average during each second
of the method call. For example, 0.06 megabytes.
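The timing and throughput metrics above can be derived from raw per-call measurements. This Python sketch uses made-up sample values to show how each field is computed; none of these variable names come from the tool itself:

```python
# Sample per-call measurements (illustrative data, not real test output).
durations = [0.16, 2.50, 0.41, 0.20]   # round-trip time of each call, in seconds
megabytes = [2.42, 2.40, 2.44, 2.42]   # data transferred by each call, in MB

least = min(durations)                  # Least: shortest call
longest = max(durations)                # Longest: longest call
average = sum(durations) / len(durations)   # Average call time

avg_mb = sum(megabytes) / len(megabytes)    # Average MB Transferred
avg_mb_per_second = avg_mb / average        # Average MB/second Transferred
```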
5. You can also view these test results in Microsoft® Excel®. To do this, click the Export to Excel button.
Expected Results:
• Server Time (blue/lower bar) < 0.5 Seconds
• Network Time (green/upper bar) < 0.4 Seconds over a LAN
• Network Time (green/upper bar) < 7 Seconds without compression or < 1.5 seconds with compression over
a WAN
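The expected results above can be expressed as a simple pass/fail check. This is a minimal sketch; the function name and parameters are illustrative assumptions, not part of the Performance and Diagnostic Tool:

```python
# Hypothetical helper that applies the expected-result thresholds listed above.
def network_test_ok(server_time_s, network_time_s, wan=False, compressed=True):
    # Server Time (blue/lower bar) should stay under 0.5 seconds in all cases.
    if server_time_s >= 0.5:
        return False
    if not wan:
        # Network Time (green/upper bar) over a LAN: under 0.4 seconds.
        return network_time_s < 0.4
    # Over a WAN: under 1.5 seconds with compression, under 7 seconds without.
    limit = 1.5 if compressed else 7.0
    return network_time_s < limit
```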
Server Diagnostics
You analyze the performance of server installations through server tracing logs. This section of the guide describes
how you set up these logs and analyze the results.
Important To begin evaluating performance, you should always turn on both the client and server tracing
logs. These logs will help your team, Epicor consulting and support professionals, and network professionals
determine the cause of the performance issue(s). By having a series of tracing logs available, your
performance issue(s) can be more thoroughly evaluated and resolved.
Write Permission
Be sure the user account you use to access the server machine has write permission to the selected folder. Do
the following to verify you have access.
1. Launch Internet Information Services (IIS) Manager. Depending on your operating system, you launch
this tool in different ways:
a. If you are on Windows Server 2008 R2 or Windows 7, click Start. In the Search field, enter Internet
Information Services; when this program appears in the results, select it.
b. If you are on Windows Server 2012 or Windows 8, press the <Windows> + F keys to display the
Charms bar; from the Apps screen, select Internet Information Services (IIS) Manager.
4. Now from the Actions pane, select the Advanced Settings... option.
The Advanced Settings window displays.
5. Locate the Identity setting and click the Browse (...) button.
7. From the drop-down list, select the Application Pool Identity option.
Before you run the server log analysis, you can define some options for the server log results.
3. If you select the Auto Save Layout and Filters on exit check box, the Performance and Diagnostic Tool
will save the current sequence of the grid columns as well as the filters you have created.
Use this feature to organize the Results, Summary Analysis, and Errors and Messages grids to display
the sequence you need. The next time you launch this tool and generate server log results, the information
displays using the saved layout and filters.
4. Click the UTC column Time Zone drop-down list to select the time zone in which this server log file was
created.
When you generate the server log results, the UTC column converts date/time values to use the selected
time zone, and these values display in the Results grid. The selected time zone will also display in the UTC
Column on this grid.
5. Click OK.
The options you select on this window become the default options for the server log results. Each time you
generate results for a server log, the Results grid displays the data using these options.
As you evaluate performance issues, you will most likely create multiple application server logs. How many log
files you save, the details you include, and how large you let them grow depends on your preferences and testing
requirements.
You should be able to save as many application server logs as you need. The number of server logs you create
and the server details you wish to include depends on what aspect of the Epicor application you are evaluating.
If you are evaluating system performance on a daily basis, you will most likely create at least one new application
server log each day. However if you are evaluating when peak server activity occurs, you will most likely record
a week or several weeks' worth of activity.
By default each application server log file is limited to a size you specify. When one file reaches this size limit, the
Epicor Administration Console creates a new log file. It uses the original file name as a prefix and then adds a
date time stamp (the UTC date and concatenated time). You define these limits in the Max Log Size and Max
Log Files fields described below.
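The rotation naming described above (the original file name as a prefix, followed by a UTC date/time stamp) can be sketched as follows. The exact stamp format is an assumption for illustration; verify it against the files your own application server produces:

```python
from datetime import datetime, timezone

# Illustrative sketch of the rotated-log naming convention described above.
# The "%Y%m%d%H%M%S" stamp format is an assumption, not a documented value.
def rotated_log_name(base="AppServer.log", now=None):
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%d%H%M%S")  # UTC date and concatenated time
    prefix, _, ext = base.rpartition(".")
    return f"{prefix}_{stamp}.{ext}"
```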
Each log also includes a series of server detail values. Depending on the purpose for creating the log, you may
not need to track all of these details. The <appserver>.config file includes a setting that prevents these details
from recording in the log.
2. Use the tree view to expand the Epicor Server node; select the application server to which you want to
connect.
4. If you are connecting to a remote server and your domain user account does not have administration rights
to this server, a Network Credentials window will display. Enter the domain\user name and password for
an account that can connect to the remote server. This domain user account must have full administrative
rights on this server machine.
If the Epicor Administration Console still cannot connect with IIS using this account, the Network Credentials
window will appear again. Click Cancel and this window will no longer display. The application pool indicator
will now show the Unknown status. Contact your system administrator to set up a domain user account
that has the rights required to connect with this server.
5. When you finish entering this domain user account, click OK.
The New Session window displays. Notice the application server URL displays in the Connecting to field.
6. From either the Action menu or the Actions pane, select Application Server Settings.
The Application Server Settings window displays.
7. Use the fields on this window to activate the application server log and determine what information this
log gathers. Select the Trace Log Enabled check box.
8. Enter the File Location. This field indicates where you want the application server log to generate. Either
enter this path directly or click the Browse (...) button to find and select this directory path.
9. To avoid running into disk space issues, you can control the size and number of logs you want to maintain.
Use the Max Log File Size field to define how large you will allow each file to grow. Enter the size limit
and then select a size option from the accompanying drop down list. Available options:
• Bytes
• Kilobytes
• Megabytes
• Gigabytes
10. When each log file reaches this size limit, the application server creates a new log file. To limit how many
log files the application server will create, enter a number in the Max Log Files field. The application server
will generate this number of log files and then it will stop gathering server log data.
11. Next define what Standard Logging information you want the application server log to record. If you are
only tracking a specific database activity, just activate one of the specific options. Server logs are easier to
review if you only capture the types of database activity you require. Be aware that some options do not
harm performance (production friendly), while other options can reduce performance. Available options:
a. Verbose Logging - The default option. Select this check box when you want the log to record all calls,
triggers, and exception messages sent to the application server. If you wish to see any business logic
exceptions, you must select this check box. This option is production friendly.
b. Trigger Hits - When a record is sent to the database to be added, updated, or deleted
(Write/Update/Delete), the framework creates an event in which SQL Server intercepts the call and
performs table specific logic. After this event is processed, the record is sent to the database. Select this
check box to record these trigger events in the server log.
c. Detailed Exceptions - Indicates you want to record the complete details of each exception message.
The full stack trace of the exception is included in the server log. You then see which items in your Epicor
ERP application were affected by the exception. This option is production friendly.
d. ERP DB Hits - Activate this check box to track how the Epicor ERP application interacts with the database.
You can review each database hit as well as how long it took each hit to complete.
e. BPM Logging - Select this check box to record Business Process Management (BPM) method calls. Each
time user activity activates a BPM directive, the application server log records the business object method
that was called and how long this call took to complete. This option is production friendly.
f. BAQ Logging - Select this check box to record Business Activity Query (BAQ) database calls. Each time
user activity activates a BAQ, the application server log records which query was called and how long it
took this BAQ to gather the data results. This option is production friendly.
Tip Remember that the Verbose Logging, Detailed Exceptions, BPM Logging, and BAQ
Logging options are production friendly.
12. Indicate which Advanced Logging information you want to include on the application server log. These
options record calls from the overall system server, and may impact performance while active. Available
options:
a. System DB Hits - Select this check box to record all the hits the database receives from SQL Server. Use
these values to determine the performance of SQL Server.
b. System Table Methods - Activate this check box to track the method calls being placed against the
system tables.
c. SQL Query Detail - Select this check box to have the application server log include the details of the
SQL queries performed by the application.
13. When you finish making your selections, click Apply and then OK.
14. The Server Manager dialog box displays, asking if you want these log settings to activate. If this is a good
time to begin generating results in the application server log, click Yes.
The application server restarts, using the selected trace log options. Your selected trace log settings are written
to the AppServer.config file. When you select a tracing option, you activate the <TraceFlag> setting in this
configuration file, and these settings determine what the application server log records. The AppServer.config
file is located in the DeploymentServer directory.
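As a rough illustration, the trace settings written to AppServer.config take a shape along these lines. The element layout below is an assumption modeled on the <TraceFlag> setting and the profile:// flag URIs mentioned elsewhere in this guide; verify the exact names against your own generated file:

```xml
<!-- Illustrative sketch only: the element nesting is an assumption.
     The flag URIs follow the profile://... style shown later in this guide. -->
<appSettings>
  <TraceFlag URI="profile://system/db/hits" />
  <TraceFlag URI="profile://system/db/epiprovider" />
  <TraceFlag URI="profile://system/db/stacktrace" />
</appSettings>
```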
Important After you gather the system information you need, be sure to return to the Epicor Administration
Console and de-activate your log setting options. This reduces unnecessary calls to the server and improves
performance.
This tracing log records any business logic exceptions internal to the Epicor ERP application. For example, this log
records errors such as too many characters entered in a field or a record that already exists, as well as how long
it takes a business object method to run.
Any errors that occur outside of the internal business logic are not recorded in the application server log. Examples
of items not captured by this log include framework exceptions, security exceptions, fatal errors, and similar
items. You can review these errors through the Event Viewer. You access this Windows tool through a control
panel. Review the Event Viewer topic later in this guide for details on how to launch and use this Windows tool.
a. Launch Windows Explorer and navigate to the directory that contains the server log. Click and drag
the client log icon from Windows Explorer to the Add Files grid in the Performance and Diagnostic
Tool window.
b. Click the Add Files... button and browse to the location of the server log file. You can also click the
Down Arrow next to the Add Files button to select server log files from multiple server locations. You
may need to do this on load balanced systems where the application is located across multiple servers.
The selected server log file displays on the Add Files grid. Continue to add the server log files you need to
evaluate.
4. Select the Parse when exceeded threshold check box to parse all nodes from the Results tab when the
Millisecond threshold is exceeded. You can then see more information about each entry in the Results
tab.
5. Select the Ignore GetRowsKeepIdleTime check box to filter any system calls sent to the server.
The Performance and Diagnostic Tool will not display values from the GetRowsKeepIdleTime business
object method. This method runs in the background while a current form is active, but it does not indicate
how long it takes other methods like Update, GetNew, and GetList to run. These method calls are generated
by user activity.
6. To filter the logs by date, use the From date and To Date fields to select a specific date range. All server
logs generated in this period are moved to the Results folder. Type a specific date in the Go To Date field
to scroll the specified date into view in the Results window.
The Performance and Diagnostic Tool analyzes the server log. You review this data on the Results, Summary
Analysis, Running Processes, and Log Errors sheets.
Analyze Results
Use the features on the Performance and Diagnostic Tool to review the results recorded on the selected server
log. This topic explores the features available for analyzing the server logs.
To analyze the server log results:
1. Click on the Results tab to review the server calls that were larger than the Millisecond Threshold you
entered.
The Results grid displays.
2. You can filter the log results based on values that populate each column. To do this, click on the Funnel
icon that displays on each column header.
A filter window displays. The filter options that display depend on the field type and the column data.
3. By default, you can filter on (All) the records in the column. This indicates that every record in this column
displays.
4. To display records where this column does not have a value, select the (Blanks) option.
5. You can also filter this column on a specific value. To do this, first clear the (All) check box and then select
the records you want to include in the filter. For example, you can clear (de-select) all the check boxes except
the Workstation IDs you want to review. Entries that have the selected Workstation ID values display in the
results grid.
6. Each filter window also has a <FieldType> Filters sub-menu. This sub-menu contains a series of filter
options you can use to limit the results based on the field type.
For example, if you are filtering on a text field, you have Text Filters options like Equals..., Does Not Equal...,
Begins With..., Ends With..., and so on.
8. Use this window to create custom filter conditions. You filter against the selected column, such as Client
Workstation ID. You then select an expression from the middle drop-down list (such as Greater than), and
then enter the custom value in the right drop-down list.
11. When you finish creating the custom filters, click OK.
12. Use the More Information column to only display records that have business process management (BPM)
directives and/or business activity query (BAQ) information. Click the Funnel icon within this column.
13. Clear the (All) check box and then select the #BAQ# #Sql# and #BPM# #Sql# options.
Notice you can select a number of options. Available filters:
• #Exception#
• #Exception# #Sql#
• #Exception# #Trigger#
• #Sql#
• #Trigger# #Sql#
• #Trigger# #Sql# #BPM#
15. You can maximize the grids by pressing the <F11> key or selecting the Options > FullScreen menu option.
The interface now fills your screen, displaying more columns and information on the Results, Summary
Analysis, Errors and Messages, and the GetRowsKeepIdleTime Chart grids. These grids keep any
options you selected on the default view. To return the Performance and Diagnostic Tool to the default
view, click the Leave full screen mode option on the title bar.
16. To help organize the results, use the Group By feature on the grid. Click and drag a column header (for
example, Object Name) to the Drag a column header here to group by that column area. You can drag
multiple column headers to further structure the results as you need.
Example If you group by using the Workstation ID column, the grid displays all the workstation
entries together in alphabetical order.
17. You can review the following items on each business method call:
a. Appserver Thread - Indicates which thread from the application server was used to run the business
method call.
b. Client Workstation ID - Displays the workstation from which the business method call was sent.
c. ERPUser - Displays the user who initiated the business method call.
d. Exceeds MS Threshold - If this check box is selected, it indicates this server call took longer to execute
than the Millisecond Threshold value you entered.
e. Execution Time (ms) - Indicates how long in milliseconds it took the server to complete the current
business method call.
f. FileNumber - Displays the number of the server log file from which the current row was generated. The Add
Files grid displays the matching FileNumber column, so you can use this value to open this server log
.txt file.
g. LineNumber - Displays the line number in the server log file from which the current row was generated.
You can then open the server log .txt file and locate this line.
h. Machine - Displays the computer that originated the business method call. If you are analyzing several
files from multiple servers (for example, if you are in a web farm), this column helps you identify the
source of the call.
i. Method Name - Displays the name of the method that generated the call. Some example methods
include GetList, ChangeNeedByDate, KitPartStatus, and so on.
j. MoreInformation - Indicates whether this business method call originated from a business activity query
(BAQ) or a business process management (BPM) directive. You can use this column to filter the Results
grid to only display BPM and BAQ calls.
Tip To see this additional information, activate the BAQ and/or BPM logging options on the
application server. Launch the Epicor Administration Console, use the tree view to expand the
Server Management node, and select the application server. From the Actions pane, launch the
Application Server Settings window. Within the Standard Logging section, select the BPM Logging
and/or BAQ Logging check boxes.
k. Object Name - Displays the name of the business object that generated the call. Some example objects
include GLJournalEntry, APInvoice, Job Entry, and so on.
l. Type - Indicates the business object that generated the business method call. Typical values in this column
are Erp.BO (application calls), Ice.BO (framework calls), and Ice.Lib (framework resource calls).
m. UTC - Displays the Coordinated Universal Time zone used for the business method call.
18. Click the Clear Filters button to restore the default layout for the grid columns and remove all filters from
the grids. This resets the Results, Summary Analysis, and Errors and Messages grids to their original
base, unfiltered layout.
19. To remove the generated data results, click the Clear Results button.
Tip The server log contains an additional node that records more details about each task sent to the server.
When you view the log in a text editor (such as Notepad), the <RunTaskInfo> node displays with each task
entry. This node identifies the task number, description, and run procedure. For example:
• <RunTaskInfo msg="SysTaskNum: 1, Task Description: Change Log Report, RunProcedure:
Ice.Internal.XA.ChgLogReport.dll" />
The Results grid also has other options you can activate from its context menu. Use these options to copy the
data or display this data in different ways.
To use the context menu:
2. Select the Copy All option to place the rows in the Windows clipboard. You can then paste this data into
a text editor, Microsoft® Word®, and other applications.
3. Use the Copy All Include Labels option to copy both the rows and the column labels. When you paste
this data in another application, the column labels display above the rows.
4. If you just want to copy a few rows, select specific rows by using the <Ctrl> button or select a range of
rows using the <Shift> button. Then right-click the grid and select the Copy Selection option.
5. To copy both the selected rows and the column labels, select the Copy Selection Include Labels option.
6. Select Show Summaries... to display the Epsilon button on each column. Click this button to launch
the Show Summaries window.
You use this window to summarize data on the column through the Average, Maximum, Count and
other options.
7. To reset the Results grid to its original layout and remove any filters you have created, select the Clear saved
layout and filters option.
9. Click the Show more details option to display all the available information for each call in a new sheet.
The Operation Details sheet displays the current row and its child rows so they can be analyzed individually;
it lists any BPM, BAQ, Exception, or other entries that are part of the current row.
10. Navigate to the SQL Hits sheet to review SQL information for the current row.
This sheet displays each SQL query that the current row executed, alongside its arguments and the values
sent to SQL Server by Epicor. It also displays the stack trace of each of the SQL calls, so your Epicor
support person can help the Epicor Development team pinpoint where a query was executed; the sheet also
shows the time that each query took to execute.
Note
To display SQL information for a row, ensure the following trace flags are enabled in your server
configuration:
• profile://system/db/hits
• profile://system/db/epiprovider
• profile://system/db/stacktrace
Important If you enable the above trace flags, the application server may become unresponsive and
perform slower than expected. It is recommended to enable them only for brief periods of time, or on a
test server or other test environment.
11. To view the source location, right-click a stack trace and select Open Source Location.
Note The option is defined in the Settings > Global Options sheet.
The Summary Analysis sheet calculates the total performance results for each business object method.
To review the server log summary:
2. Notice you can group the results by various columns on this grid.
3. You can review several items on each business method call. Some key columns on this sheet include:
a. Frequency - Indicates how many times the business object method sent a call to the server.
b. % - Freq - Displays the percentage this call was run compared to other method calls captured in this
log.
c. Total Execution Time - Displays the total time this business method ran during all the business method
calls recorded within this server log.
d. Average Execution Time - Displays the average length of time it took this business method to start
and end each business method call.
e. Longest - Displays how much time it took the longest call for this business method to complete.
f. Least - Displays how much time it took the shortest call for this business method to complete.
g. % - Time - Contains the percentage this call ran compared to other method calls captured in this server
log.
4. Each of these columns has an Epsilon button. Click this button to display the Show Summaries
dialog box.
5. Use this dialog box to select the summaries you wish to view. Available options:
• Average
• Count
• Maximum
• Minimum
• Sum
6. Click OK.
Your selected options display at the bottom of the grid. They appear below the Grand Summaries row.
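The Summary Analysis columns above can be derived from raw (method, execution time) pairs parsed out of a server log. This Python sketch shows the arithmetic behind each column; the sample data and variable names are illustrative assumptions:

```python
from collections import defaultdict

# Sample (method name, execution time in ms) pairs; illustrative data only.
calls = [("GetList", 120), ("GetList", 80), ("Update", 400)]

by_method = defaultdict(list)
for method, ms in calls:
    by_method[method].append(ms)

total_time = sum(ms for _, ms in calls)
summary = {
    m: {
        "Frequency": len(times),                          # call count
        "% - Freq": 100 * len(times) / len(calls),        # share of all calls
        "Total Execution Time": sum(times),
        "Average Execution Time": sum(times) / len(times),
        "Longest": max(times),
        "Least": min(times),
        "% - Time": 100 * sum(times) / total_time,        # share of total time
    }
    for m, times in by_method.items()
}
```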
Review the Errors and Messages sheet to analyze any errors or additional messages that may display in the server
log.
To analyze the server log errors and messages:
2. You can review several items on each error. Columns on this sheet include:
a. LogLine - Displays the line from the server log that recorded the error or message.
b. ClientID - Displays the workstation that caused the error or message to appear.
c. ObjectName - Contains the name of the business object that generated the error or message. Some
example objects include GLJournalEntry, APInvoice, Job Entry, and so on.
d. Method Name - Displays the name of the method that generated the error or message. Some example
methods include GetList, ChangeNeedByDate, KitPartStatus, and so on.
e. StartDateTime - Displays the date and time this error or message began.
f. Appserver Thread - Indicates the application server thread on which the error or message occurred.
g. ERPUser - Displays the user who initiated the business method call that caused the error or message.
h. Duration (ms) - Indicates how long in milliseconds the error or message ran.
i. LineNumber - Contains the line number in the server log file from which the current row was generated.
You can then open the server log .txt file and locate this line.
j. FileNumber - Displays the number of the server log file from which the current row was generated. The Add
Files grid displays the matching FileNumber column, so you can use this value to open this server log
.txt file.
3. When you finish reviewing the server log, click the Clear Results button.
The GetRowsKeepIdleTime method can help you analyze how well the server interacts with Epicor client
installations. You can review this standard metric through the GetRowsKeepIdleTimeChart in the Performance
and Diagnostic Tool.
The GetRowsKeepIdleTime method is used by the smart client. The client checks the application server to find
out if the report or process has finished its run. If the run is finished, the data is sent back to the client as output
for this method call. The client then uses this call to populate the results to the database or report. Because this
method is a system check regularly sent to the server that typically does not return data, you can also use this
method to measure network traffic that may impact performance.
You use the Performance and Diagnostic Tool to display the GetRowsKeepIdleTime chart that shows you the
performance time of each method call in the server log. You can then pinpoint specific times of the day when
there was increased server activity that affected performance.
To evaluate performance for GetRowsKeepIdleTime method calls:
1. You first need to indicate you want to generate the GetRowsKeepIdleTime Chart. Within the Performance
and Diagnostic Tool, click Settings > Options.
The Settings window displays.
4. Now indicate the Time Interval to Chart (minutes) option. This value defines the span of time you want
to evaluate at each point in the GetRowsKeepIdleTime Chart. The default value is one minute.
5. Click OK.
You return to the main window of the Performance and Diagnostic Tool.
6. As described previously, click the Add Files... button to find and select the server log you wish to review.
The chart displays using the time interval you selected. The left side of the grid indicates how long it took
to run the GetRowsKeepIdleTime call, while the bottom of the grid indicates the time of the day when the
call occurred. The expected result is that these calls should run using an average of 15 milliseconds and
demonstrate low performance variability throughout the day. Each call should always be less than 30
milliseconds.
10. Optionally, you can click the Export to Excel button to save this graph as an .xlsx spreadsheet.
You can then use Microsoft® Excel® to refine this graph as you need. You and your Epicor consultant use
this spreadsheet to evaluate the overall performance of your system.
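The expected results for this chart (an average of about 15 milliseconds, low variability, and no call over 30 milliseconds) can be expressed as a simple health check. The function below is an illustrative sketch, not part of the tool:

```python
# Hypothetical check applying the expected results described above:
# calls should average around 15 ms and every call should stay under 30 ms.
def idle_time_healthy(durations_ms):
    average = sum(durations_ms) / len(durations_ms)
    return average <= 15 and max(durations_ms) < 30
```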
Export Results
You can further analyze the server diagnostic results by exporting this information into Microsoft® Excel® or
Microsoft® SQL Server® Management Studio.
Leverage the tools in these applications to more deeply analyze the performance data. When you export the
results to Microsoft Excel, you can use this application to display the data using tables and graphs. When you
export the results to SQL Server, you place these results into a series of SQL tables and views. You can then use
SQL Server Management Studio to run queries against these results.
1. To save the results in a Microsoft Excel file, click the Export to Excel button.
The Save As window displays.
3. Enter a File name that helps you easily identify the purpose of the .xlsx file.
4. Click Save.
You can now launch the .xlsx spreadsheet and review it in Microsoft Excel.
5. You can also export the client log results into SQL Server Management Studio. Click the Down Arrow on
the Export to Excel button; select the Export to SQL option.
6. The Optional Export Description window displays. By default, the first file's title is selected as the
description; you can either leave it blank or enter a name. If there are multiple files, the first line title is
automatically selected.
7. Click OK.
9. In the Connection tab, select the SQL Server to which you want to link to the results data.
10. Now enter the User name and Password you use to log into this server.
11. From the drop-down list, select the database that will receive the client diagnostic results.
12. Verify you can export the results to this database by clicking the Test Connection button.
13. A dialog box displays indicating the connection is successful. Click OK.
You return to the Data Link Properties window.
16. Now open SQL Server Management Studio and review the schema created by the PDT. You can view
some of the new tables:
• ExportRun - The main table, which displays the user-selected Optional Export Description as well as a
few other related fields. Its main field, Export ID, should be used to differentiate between export
executions in each of the child tables.
• ExportZipFiles - A table linked to the ExportRun table that contains a zipped (binary) version of
the file(s) exported to SQL, as well as the path from which the files were exported.
• Client_<TableName> - Multiple tables, one for each of the items displayed in the PDT application. You
can run queries against them to extract the data you need.
Within SQL Server Management Studio, you can now display the diagnostic results and review them in a grid.
The results also display in tables under the Tables node, and a series of views is generated as well.
Fields
This topic documents the fields and sheets available for analyzing the server log files.
Some fields on the interface have a context menu, which is indicated by a triangle in the upper right corner of
the field. To open the context menu, right-click on the field.
Add Files
Click the Add Files button to select one or more server log files for analysis. Depending on what you are evaluating,
multiple server.log files may have been generated. You can then analyze these files together within the Performance
and Diagnostic Tool.
You can also click the Down Arrow next to the Add Files button to select server log files from multiple server
locations. You may need to do this on load balanced systems where the application is located across multiple
servers.
Clear Filters
Click this button to restore the default layout for the grid columns and remove the filters you've created. This resets
the Results, Summary Analysis, and Errors and Messages grids to their original base layout. You can now
click the Generate Diagnostics button again and display unfiltered results throughout these grids.
Clear Results
Click the Clear Results button to remove the log file information from the Results, Summary, Errors and Messages,
and GetRowsKeepIdleTime Chart sheets.
Clear Selected
Click the Clear Selected button to remove the log file paths you have currently loaded into the Performance and
Diagnostic Tool.
Epicor Version
Use this drop-down list to select the application version run with the selected server log(s). The default version
is 10.0.1.
Export to Excel
Click the Export to Excel button to export results to Microsoft® Excel® for further manipulation and analysis. The
exported data is based on which grid is currently visible. For example, if the Results grid is visible, the data on
this grid is exported.
Generate Diagnostics
Click the Generate Diagnostics button to create the server log file data results.
GetRowsKeepIdleTime Chart
Use the GetRowsKeepIdleTime chart to display the performance time of each GetRowsKeepIdleTime method call
in the server log. This method call is run by the System Monitor to check for server related activity like uploading
reports. You can then pinpoint specific times of the day when there was increased server activity which affected
performance.
Go To Date
Type a specific date to scroll the Results window so the entries for that date come into view.
Ignore GetRowsKeepIdleTime
Click this check box to hide all GetRowsKeepIdleTime method call information from the server analysis results.
This method call is run by the System Monitor to check for server related activity like uploading reports. Because
this method is called frequently by every client on your system, select this check box to prevent these calls
from appearing in the log results.
Millisecond Threshold
Any BusinessObject.Method calls greater than the value entered in this field have their Exceeds Threshold
check boxes automatically selected. This value indicates these calls are over the threshold value (in milliseconds)
you defined. Use this value to perform Sort By or Group By actions to review longer duration method calls.
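The threshold check amounts to a simple numeric filter over method call durations. As an illustration only (the object and method names and the two-column format below are made up; the PDT performs this check internally), the same idea in shell:

```shell
# Hypothetical sketch of a Millisecond Threshold filter.
# Input format (invented for illustration): "BusinessObject.Method duration_ms"
threshold=500
printf 'SalesOrder.Update 742\nPart.GetByID 120\nQuote.GetRows 918\n' |
awk -v t="$threshold" '$2+0 > t+0 { print $1, "ExceedsThreshold" }'
```

Only the calls whose duration exceeds the 500 ms threshold are flagged, which mirrors how the Exceeds Threshold check box is selected automatically in the grid.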
Results Sheet
The Results sheet displays all calls received by the application server during the run of the server log. You can use
Group By functionality to group the results by Object and/or Method to see the execution times for all method
calls.
Summary Sheet
The Summary sheet summarizes the results for each BusinessObject.Method, showing the number of calls and
the Total/Average/Least/Longest duration of those calls. These duration values are measured in milliseconds. You
can sort the results by column to determine the types of calls made most frequently against the server, or which
calls consume the most time.
Scenarios
This topic contains examples of how you can measure server performance through different methods.
Log Capture
To analyze performance, you need to review application server logs, report server logs, and database server logs.
Use the Log Capture feature to gather these logs into a specific folder.
To leverage this feature set, you first enter the default paths to your network and Epicor ERP server. You then
enter the directory path where you want the Performance and Diagnostic Tool to place these log results. Run
the log capture and these files are copied to this target results location.
This feature automatically collects server logs, all matching .config files (based on the network path), the web.config
file from the Epicor ERP folder, event logs, and the SQL Server error log (SQL ERRORLOG). It can also run the
configuration check.
When you are ready to analyze server performance in the Performance and Diagnostic Tool, navigate to this
directory path to select the logs you want to review. You can also compress these files into an archive file and
then send this archive to Epicor Technical Support or your consultant for further analysis.
Before you can use the Log Capture feature, you need to define some settings on the Options window.
5. Navigate to the Config directory within the client installation. For example: C:\Epicor\ERP10\Client\Config
6. Select the .sysconfig file you use to launch your Epicor environment and click OK. For example:
default.sysconfig
The settings from the selected .sysconfig file populate the Application Setup sheet.
7. If the .sysconfig file did not contain a valid Epicor user account, enter this account in the User ID and
Password fields.
9. You can first define the Network Credentials for the account used to access the captured logs. This setting
is optional; you only need to enter a network account here if the user running the log capture does not have
one. Enter these values:
a. Domain - The name of the operating system domain (Windows, Unix, Linux, and so on) for this user
account.
b. User Name - The identifier for the user account that can access the operating system domain.
c. Password - The password that can access the operating system domain.
Now when users try to capture logs but do not have an account with network access, they can select the
Use network credentials check box on the Log Capture screen.
10. Enter the Results directory path for the folder that will gather the captured logs. You can enter a directory
path on your local client machine or a path on your server machine. If you do not enter a folder, the default
C:\Users\<YourUserName>\AppData\Local\Temp\ directory will store the captured logs.
11. Now define the .NET Default Path for the Windows® .NET® operating system. For example:
C:\Windows\Microsoft.NET\Framework\<VersionNumber>\Config
12. Enter the E10 Instance Path that points to your Epicor ERP server. For example:
D:\EpicorSites\EpicorERP\Server
13. Next enter the SQL Logs Path. This value defines the directory path and folder that stores your server logs.
For example: C:\Program Files\Microsoft SQL Server\MSRS12.SQL2014\Reporting Services\LogFiles
You will now capture the logs using the paths you specified on this window.
As you evaluate performance issues, you will most likely create multiple application server logs. How many log
files you save, the details you include, and how large you let them grow depends on your preferences and testing
requirements.
You can save as many application server logs as you need. The number of server logs you create and the server
details you wish to include depend on what aspect of the Epicor application you are evaluating.
If you are evaluating system performance on a daily basis, you will most likely create at least one new application
server log each day. However if you are evaluating when peak server activity occurs, you will most likely record
a week or several weeks' worth of activity.
By default each application server log file is limited to a size you specify. When one file reaches this size limit, the
Epicor Administration Console creates a new log file. It uses the original file name as a prefix and then adds a
date time stamp (the UTC date and concatenated time). You define these limits in the Max Log Size and Max
Log Files fields described below.
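The rotation naming described above can be sketched in shell. The exact stamp format is an assumption, modeled on the example file name ServerLog2014-07-02T18-18-47.txt used in the command-line examples later in this guide:

```shell
# Hedged sketch of the rotated-log naming convention: the original file name
# becomes a prefix, followed by a UTC date and concatenated time stamp.
base="ServerLog"
stamp=$(date -u +%Y-%m-%dT%H-%M-%S)
echo "${base}${stamp}.txt"
```

For example, a log rotated at 18:18:47 UTC on July 2, 2014 would be named ServerLog2014-07-02T18-18-47.txt.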
Each log also includes a series of server detail values. Depending on the purpose for creating the log, you may
not need to track all of these details. The <appserver>.config file includes a setting that prevents these details
from recording in the log.
2. Use the tree view to expand the Epicor Server node; select the application server to which you want to
connect.
4. If you are connecting to a remote server and your domain user account does not have administration rights
to this server, a Network Credentials window will display. Enter the domain\user name and password for
an account that can connect to the remote server. This domain user account must have full administrative
rights on this server machine.
If the Epicor Administration Console still cannot connect with IIS using this account, the Network Credentials
window will appear again. Click Cancel and this window will no longer display. The application pool indicator
will now show the Unknown status. Contact your system administrator to set up a domain user account
that has the rights required to connect with this server.
5. When you finish entering this domain user account, click OK.
The New Session window displays. Notice the application server URL displays in the Connecting to field.
6. From either the Action menu or the Actions pane, select Application Server Settings.
The Application Server Settings window displays.
7. Use the fields on this window to activate the application server log and determine what information this
log gathers. Select the Trace Log Enabled check box.
8. Enter the File Location. This field indicates where you want the application server log to generate. Either
enter this path directly or click the Browse (...) button to find and select this directory path.
9. To avoid running into disk space issues, you can control the size and number of logs you want to maintain.
Use the Max Log File Size field to define how large you will allow each file to grow. Enter the size limit
and then select a size option from the accompanying drop down list. Available options:
• Bytes
• Kilobytes
• Megabytes
• Gigabytes
10. When each log file reaches this size limit, the application server creates a new log file. To limit how many
log files the application server will create, enter a number in the Max Log Files field. The application server
generates this number of log files and then stops gathering server log data.
11. Next define what Standard Logging information you want the application server log to record. If you are
only tracking a specific type of database activity, activate just that option; server logs are easier to review
if you only capture the types of database activity you require. Be aware that some options do not
harm performance (production friendly), while other options can reduce performance. Available options:
a. Verbose Logging - The default option, select this check box when you want the log to record all calls,
triggers, and exception messages sent to the application server. If you wish to see any business logic
exceptions, you must select this check box. This option is production friendly.
b. Trigger Hits - When a record is sent to the database to be added, updated, or deleted
(Write/Update/Delete), the framework creates an event in which SQL Server intercepts the call and
performs table specific logic. After this event is processed, the record is sent to the database. Select this
check box to record these trigger events in the server log.
c. Detailed Exceptions - Indicates you want to record the complete details of each exception message.
The full stack trace of the exception is included in the server log. You then see which items in your Epicor
ERP application were affected by the exception. This option is production friendly.
d. ERP DB Hits - Activate this check box to track how the Epicor ERP application interacts with the database.
You can review each database hit as well as how long it took each hit to complete.
e. BPM Logging - Select this check box to record Business Process Management (BPM) method calls. Each
time user activity activates a BPM directive, the application server log records the business object method
that was called and how long this call took to complete. This option is production friendly.
f. BAQ Logging - Select this check box to record Business Activity Query (BAQ) database calls. Each time
user activity activates a BAQ, the application server log records which query was called and how long it
took this BAQ to gather the data results. This option is production friendly.
Tip Remember that the Verbose Logging, Detailed Exceptions, BPM Logging, and BAQ
Logging options are production friendly.
12. Indicate which Advanced Logging information you want to include on the application server log. These
options record calls from the overall system server, and may impact performance while active. Available
options:
a. System DB Hits - Select this check box to record all the hits the database receives from SQL Server. Use
these values to determine the performance of SQL Server.
b. System Table Methods - Activate this check box to track the method calls being placed against the
system tables.
c. SQL Query Detail - Select this check box to have the application server log include the details of the
SQL queries performed by the application.
13. When you finish making your selections, click Apply and then OK.
14. The Server Manager dialog box displays, asking if you want these log settings to activate. If this is a good
time to begin generating results in the application server log, click Yes.
The application server restarts, using the selected trace log options. Your selected trace log settings are written
to the AppServer.config file. When you select a tracing option, you activate the <TraceFlag> setting in this
configuration file, and these settings determine what the application server log records. The AppServer.config
file is located in the DeploymentServer directory.
Important After you gather the system information you need, be sure to return to the Epicor Administration
Console and de-activate your log setting options. This reduces unnecessary calls to the server and improves
performance.
This tracing log records any business logic exceptions internal to the Epicor ERP application. For example, this log
records an error when too many characters are entered in a field or when a record already exists, as well as how
long it takes a business object method to run, and so on.
Any errors that occur outside of the internal business logic are not recorded in the application server log. Examples
of items not captured by this log include framework exceptions, security exceptions, fatal errors, and similar
items. You can review these errors through the Event Viewer. You access this Windows tool through a control
panel. Review the Event Viewer topic later in this guide for details on how to launch and use this Windows tool.
Besides capturing server logs, you can also set up your system to automatically place client (UI Trace) logs in the
Results folder or another folder you specify. You can then activate the client log on a user account and the log
files are placed in this directory.
You do this by first modifying some settings in the Epicor.exe.config file. You then activate the client log on the
user account.
1. Using Windows Explorer, navigate to your Epicor ERP client directory. For example:
• C:\Epicor\<YourEpicorVersion>\Client
3. Right-click this file; from the context menu, select Open With.... > Notepad.
The Epicor.exe.config file displays.
5. You next define that directory file location by removing the comments around the <add key="UITraceFileDefaultDirectory" value="%appdata%\epicor\logs"/> setting. By default, the client
log files upload to the logs folder in the application data directory (for example:
C:\Users\<ClientUserName>\AppData\Roaming\epicor\logs); however, you can change this value to a shared folder to
set up a central location to capture your client (UI Trace) files.
Tip To make it easy to find these client logs, consider placing them in the same Results folder you
defined within the Performance and Diagnostic Tool.
8. Log into the Epicor ERP application with your security manager account.
10. Use the Detail sheet to find and select the user account.
12. Now from the Log Directory Scheme drop-down list, select the Default from Epicor.exe.config file
option.
This causes the application to automatically place the client logs generated by this user account in the
directory folder you specified.
Client logs now generate automatically for this user account and are available in a central directory location.
You can use the Client Trace Analysis features in the Performance and Diagnostic Tool to review these tracing
results; you can also send these files to Epicor Technical Support or your Epicor consultant for further analysis.
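As a sketch of step 5 above: after the comments are removed, the setting might look like the fragment below. This assumes the key sits in the standard .NET <appSettings> section of Epicor.exe.config; surrounding keys are omitted.

```xml
<appSettings>
  <!-- Uncommented so client (UI Trace) logs go to this directory;
       %appdata% can be replaced with a shared folder for central capture. -->
  <add key="UITraceFileDefaultDirectory" value="%appdata%\epicor\logs"/>
</appSettings>
```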
When you run the Log Capture process, you place the server logs into the Results directory you defined on the
Options window.
Note that this process creates a backup copy of each log; the original logs are still available in the server directory
location.
2. If your user account does not have access to the network directory, select the Use network credentials
check box. The Performance and Diagnostic Tool then connects to the network through the user account
entered on the Settings window.
3. If you want to compress the server logs, select the Zip Results Folder check box.
During the log capture, the logs contained in the Results folder will automatically compress into a single
.zip file.
4. To filter the logs, use the Server Logs from date field to select a specific date. All server logs generated on
this date or later are moved to the Results folder.
Tip Notice the Results folder you defined displays next to the Server Logs from date field. This
indicates where the logs will be captured; to change this value, click Options > Settings > Log
Capture and enter a different directory.
5. Now use the Main sheet to select the different servers from which you want to capture logs. You do this
by adding rows to the grid.
When you add a server, its row populates using the values you defined on the Settings window. However
you can change these values. Do this when your servers use a different path for the Epicor ERP instance
folder, or when a script requires a different path. To modify the server:
a. Click the Server Type drop-down list to select what kind of server contains the logs you need to capture.
You can select from the Application Server, Report Server, Database Server, and All options.
b. Now enter the ServerName. This value defines the specific name for the server from which you will
capture the logs.
c. The Connection Status field indicates whether the Performance and Diagnostic Tool can connect with
the server. After you click the Connection Status button, this column displays the results; if the connection
works, an OK value displays in this column.
If you cannot communicate with the server, access the Windows Control Panel to verify your network
is linked to your computer and that you have entered the correct ServerName.
d. Select the Run check box to indicate that the logs should be copied from the current server.
6. Now if a script is available to run against the current server, the Script grid displays below the server row.
You need to decide whether you want to run the script:
c. Run - Select this check box to cause the script to activate when you capture the logs.
7. Click the Connection Status button to verify whether the Performance and Diagnostic Tool can connect
to each server selected on the Main grid. Each grid row updates with the verification results. If the tool can
connect to the server, the Connection Status column displays an OK value.
8. The default PowerShell scripts are required to run the log capture process. If you modified or deleted the
PowerShell scripts, click the Restore Scripts button. The scripts regenerate using their default values. These
script files are usually located in the C:\Users\<UserName>\AppData\Roaming\Epicor
Software\E10PDT\PS1 directory.
9. After you finish selecting your options, click the Capture Logs button.
The log capture process runs.
10. To check on the log capture, click the Capture Progress sheet.
This sheet details the capture process and indicates when this process is complete.
The server logs copy to the central Results folder. You can now use the Client Trace Analysis and Server
Diagnostics features in the Performance and Diagnostic Tool to evaluate the captured logs.
Fields
This topic documents the fields you define or review to capture server logs.
Capture Logs
Click this button to launch the Capture Logs process. This process moves the server logs you selected to the
Results directory.
Capture Progress
Details the capture process. As the log capture process runs, the Performance and Diagnostic Tool selects logs
from the selected application servers, database servers, and reports servers, copying them to the Results folder.
You can watch the progress on this sheet; it then indicates when the process is complete.
Connection Status
Click this button to verify whether the Performance and Diagnostic Tool can connect to each server selected
on the Main grid. Each grid row updates with the verification results. If the tool can connect to the server, the
Connection Status column displays an OK value.
Restore Scripts
The default PowerShell scripts are required to run the log capture process. If you modified or deleted the PowerShell
scripts, click this button. The scripts regenerate using their default values. These script files are usually located in
the C:\Users\<UserName>\AppData\Roaming\Epicor Software\E10PDT\PS1 directory.
Run (Script)
Select this check box to cause the script to activate when you capture the logs.
Run (Server)
Select this check box to indicate that logs should be copied from the current server.
ScriptDescription
Contains a brief explanation about the purpose of the script.
ScriptName
Displays the name of the script you can run.
Server Type
Click this drop-down list to select what kind of server contains the logs you need to capture. Select from the
Application Server, Report Server, Database Server, and All options.
ServerName
Defines the specific name of the server from which you will capture the logs.
Global Options
Use the Settings > Global Options sheet to enable viewing of the source code location of the stack traces.
Before you can view the source location, you must define some options for source control.
3. Select the Source Location for your source files. The following options are available:
• TFS
• Local Path
4. From the Text Editor drop-down list, select the tool you use to edit code.
5. Use the Local Path field to specify the path to your source files.
This field is only available when you select the Local Path option from the Source Location drop-down list.
6. Use the TFS Options grid to enter details of your Team Foundation Server such as Team Project Collection,
TFS Path, Domain, User Name and Password.
Once you have defined the global options for the source code, you can view the source code location for the
selected stack trace.
2. Click the Show more details option to display all the available information for each call in a new sheet.
The sheet displays the current row and its child rows so they can be analyzed individually, and lists any BPM,
BAQ, Exceptions, or other entries that are part of the current row.
3. Navigate to the SQL Hits sheet to review SQL information for the current row.
This sheet displays each SQL query that the current row executed, along with its arguments and the values
Epicor sent to SQL Server. It also displays the stack trace of each SQL call, so your Epicor support person can
help the Epicor Development team pinpoint where a query was executed; the sheet also shows the time each
query took to execute.
To display SQL information for a row, ensure the following trace flags are enabled in your server configuration:
• profile://system/db/hits
• profile://system/db/epiprovider
• profile://system/db/stacktrace
Important If you enable the above trace flags, the application server may become unresponsive and perform
slower than expected. It is recommended that you enable them only for brief periods of time, or on a test
server or other test environment.
4. To view the source location, right-click a stack trace and select Open Source Location.
The selected text editor displays the code location.
You can also run the Performance and Diagnostic Tool (PDT) through a command line interface. Use this option
to run a performance test either immediately through a saved command line or .bat file; you can also launch the
PDT automatically when a performance issue occurs.
To run the command line version of the Performance and Diagnostic Tool, you need to launch the program with
a configuration file. This configuration file can contain one or multiple tasks that you wish to run. The command
line version contains all the functionality of the Performance and Diagnostic Tool except for the Log Capture
feature; this functionality may be included in a future release.
This command line tool also directly interfaces with both the Microsoft® Task Scheduler and Microsoft®
Performance Monitor®, so you can use these tools together to monitor performance. To do this, you set up a
counter that contains a stop condition. When the system passes the threshold that triggers the stop condition,
it activates the command line PDT. The command line tool can then record performance data and/or system
information.
Dependencies
Before you can run the Performance and Diagnostic Tool from a command line, you must install the Performance
and Diagnostic Tool in your environment. You will then run the command line version of this tool from this
directory path.
Executable
Do the following to launch the Performance and Diagnostic Tool from the command line.
At the prompt in the Command or PowerShell window, navigate to the directory where you installed the
Performance and Diagnostic Tool.
Tip If you are not sure where this directory is located, right-click the Performance and Diagnostic Tool
on your desktop; from the context menu, select Properties. Review the path that displays in the Target
field.
Actions
This topic describes the action you can run with the Performance and Diagnostic Tool (PDT).
This tool measures performance through different testing features contained in the PDT, so you must define
what specific actions run within your configuration files. Because of this, you primarily run one action directly
from the Command or PowerShell command line:
• /cmdline or -cmdline - Use this action to cause the Performance and Diagnostic Tool to run the tasks
defined in a specific configuration file. For example: -cmdline "C:\PDTAutomatedScripts\ExecuteLiveMemory.xml"
The base configuration file template for the Performance and Diagnostic Tool is below. When you create a file,
use this primary node to set up a file that contains the performance and diagnostic functions you need.
When you launch the Performance and Diagnostic Tool with this file, all the tasks defined within this configuration
file automatically run.
<?xml version="1.0" encoding="utf-8" ?>
<PDT ParallelizeRun="true|false; optional argument, false by default"
     logFile="PathToLogFile;optional, sets the path for the global PDT command line, by default the path {PDTTemporaryItemsPath}\PDTAutomation.txt is selected if empty">
  <recipe name="Unique Name; to be displayed at the log file and cmd output"
          stopOnActionFailure="true|false; optional argument, false by default">
    <actions>
      <action name="Unique Name; to be displayed at the log file and cmd output"
              description="Unique Name; to be displayed at the log file and cmd output"
              validate="true|false; optional, runs the ValidatePlugin method if found, by default true"
              altValidationMethodName="NameOfThePublicMethod; optional, default value string.Empty">
        <program pluginName="Server Diagnostics|Network Diagnostics|Live Memory inspection|Config Check|Client Trace Analysis|Log Capture;Display Name (user friendly) of the PDT plugin, plugin used to execute the method"/>
        <method name="AnyPublicMethod; name of the public method to execute, has to be an instance method as the plugin is spun up to execute the method requested">
          <parameters>
            <parameter name="ParameterName;optional, name of the parameter to send" value="value of the parameter; string representation of the value to send, for null leave empty, optional parameters method(string str="DefaultValue") should be left empty and should always be added as parameters even if not being set" type="string"/>
          </parameters>
        </method>
      </action>
    </actions>
  </recipe>
</PDT>
Notice you can set up a configuration file to execute several tuning and diagnostic functions. You nest these
functions within parent tags to complete the tasks you want to handle through a single configuration file. To do
this, you set up multiple <recipe> tags within the parent <PDT> tag. These tasks can also be designed to run at
the same time, so you can record different aspects of a performance or system configuration issue.
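A minimal skeleton of this nesting is shown below; the recipe names are placeholders, and the action contents are elided:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<PDT ParallelizeRun="true">
  <recipe name="First Task">
    <actions>
      <!-- actions for the first task -->
    </actions>
  </recipe>
  <recipe name="Second Task">
    <actions>
      <!-- actions for the second task -->
    </actions>
  </recipe>
</PDT>
```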
Example Files
This topic illustrates how you set up configuration files for the Performance and Diagnostic Tool. These configuration
files can perform one or multiple tasks.
Combined
The following configuration file example runs several performance tuning and diagnostic tasks. Notice each task
is contained within a <recipe> tag. This script runs the following tasks:
• Generates server logs and then exports them to the C:\DB directory path.
• Generates client (UI Trace) logs and exports them to the C:\DB directory path.
• Inspects the memory usage on the system.
• Runs a Configuration Check.
• Tests network performance.
<?xml version="1.0" encoding="utf-8" ?>
<PDT>
</parameters>
</method>
</action>
</actions>
</recipe>
<recipe name="Process and Export ServerLogs">
<actions>
<action name="ProcessAndExportFiles" description="Add serverLog files to
process only the valid ones save to Excel file (xlsx)">
<program pluginName="Server Diagnostics"/>
<method name="ProcessAndExportServerLogFiles">
<parameters>
<parameter name="LogFilesSeparatedByComma" value="C:\DB\ServerLog2014-07-02T18-18-47.txt" type="string"/>
<parameter name="OutputLocation" value="C:\DB\ServerLog2014-07-02T18-18-47.xlsx" type="string"/>
</parameters>
</method>
</action>
</actions>
</recipe>
You can download the example configuration files shown in the previous topic, as well as other example
configuration files. You download these files through the Performance and Diagnostic Tool.
After you download these example files, you can then use them as base templates for your own configuration
files. To do this:
3. Scroll through the download options to locate the PDT Command line automation examples row.
6. Navigate to the directory path where you want to install these files.
The default path:
• C:\Users\[YourUserName]\Documents
7. Click Save.
8. Using Windows Explorer, navigate to the directory path where you downloaded your sample files.
You can then display these sample .xml files in Notepad or a similar text editor.
Rather than point to a configuration file in a specific directory path, you can instead embed the configuration
file directly into the command line.
This feature is useful if you want to create a batch (.bat) file that contains all the information you need to run an
instance of the PDT. You can then reuse the .bat file and modify the actions and parameters as you need. To do
this:
1. First create the configuration .xml file so it contains the actions you want to run.
2. Now remove all the line spaces (line feed characters) from this file; the parameters are then placed within
one continuous line.
5. Now add the -cmdline action to the command line, but instead of the directory path, enter "inline;<xml document>".
6. Paste the single line .xml file behind the -cmdline "inline;<xml document>" action. For example:
Performance and Diagnostic Tool.exe -cmdline "inline;<xml document>" <?xml version=""1.0"" encoding=""utf-8"" ?><PDT><recipe name=""Process and Export ServerLogs""><actions><action name=""ProcessAndExportFiles"" description=""Add serverLog files to process only the valid ones save to Excel file (xlsx)""><program pluginName=""Server Diagnostics""/><method name=""ProcessAndExportServerLogFiles""><parameters><parameter name=""LogFilesSeparatedByComma"" value=""C:\DB\ServerLog RMG10APP02SAC.txt"" type=""string""/><parameter name=""OutputLocation"" value=""C:\DB\ServerLog.xlsx"" type=""string""/></parameters></method></action></actions></recipe></PDT>
When you run this command line, any parameters and actions you have set up in the single line .xml file run as
well.
Setup Values
Your configuration .xml files for the Performance and Diagnostic Tool can contain the following setup
values.
You first use the parent settings tags to determine the groups of tasks that activate when you run this
configuration file. You then activate the specific PDT programs that run through the configuration file. Lastly,
you must define the methods and their parameters that run for each program.
Parent Settings
You first design your configuration file through a series of parent setting nodes. These nodes define the groups
of actions that activate when you launch the Performance and Diagnostic Tool with this configuration file.
Available parent settings:
PDT
The primary parent tag; all <recipe> tags must be nested inside the parent <PDT> tag. While the <PDT> tag
can be empty, you can also include the following optional parameter:
• logFile - Overrides the default log to generate a separate log for the tasks contained in the <PDT> tag. Enter
the directory and file name in this field. For example: logFile="C:\MyPDTLogs\MyPDTLog
You can only have one <PDT> tag within each configuration file.
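As a sketch, a <PDT> tag with the optional logFile parameter might look like the following. The log file name and recipe name here are illustrative, not taken from this guide:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<!-- logFile overrides the default PDTAutomation.log; this path is an illustrative assumption -->
<PDT logFile="C:\MyPDTLogs\MyPDTLog.log">
  <recipe name="MyRecipe">
    <actions>
      <!-- <action> tags go here -->
    </actions>
  </recipe>
</PDT>
```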
Recipe
Defines a group of actions that run at the same time. Within the log file, these tasks group together under the
name you enter for the recipe tag. The <recipe> name also displays within the window after you launch the
command line. Available parameters:
• name - Defines the unique name for the recipe of tasks that run. This name displays on the log. For example:
name="ConfigCheckBatonRougeServer"
• stopOnActionFailure - An optional parameter, this value indicates whether all tasks (actions) contained
within the <recipe> tag quit running when one action fails to complete. The expected values are true or
false. The default value is false. For example: stopOnActionFailure="true"
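Combining both parameters, a <recipe> tag might be sketched as follows (the contained actions are omitted):

```xml
<recipe name="ConfigCheckBatonRougeServer" stopOnActionFailure="true">
  <actions>
    <!-- <action> tags go here; if any action fails, the remaining actions stop -->
  </actions>
</recipe>
```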
Actions
Contains all the actions that will run when the recipe executes. You place specific <action> tags inside the parent
<actions> tag. This tag does not contain additional parameters.
Action
Defines the name of each specific action that will run when the <recipe> tag activates. Within the log file, each
action is listed separately using the name you define in this tag. You can also validate the action to ensure it
executes correctly. Available parameters:
• name - Defines the unique name for the action that runs. This name displays on the log. For example: name
="ProcessAndExportFiles"
• description - Contains a brief explanation of the purpose of the action. Enter a value in this parameter to define
the purpose of the action. For example: description="Execute live memory inspection and
export to excel"
• validate - Indicates whether the action validates as it runs. If active, the ValidatePlugin method runs against
the action. The expected values are true or false. The default value is true. For example: validate="true"
• altValidationMethodName - If you wish to run a different validation method against the action, enter this
method in this optional parameter. The default value is string.Empty, which indicates you will use the
ValidatePlugin method. For example: altValidationMethodName="string.MyMethod"
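Putting these parameters together, an <action> tag might be sketched like this, using the action name and description from the combined example earlier (the <program> and <method> contents are omitted):

```xml
<action name="ProcessAndExportFiles"
        description="Add serverLog files to process only the valid ones save to Excel file (xlsx)"
        validate="true">
  <!-- <program> and <method> tags go here -->
</action>
```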
Programs
The <program> tag indicates which plugin runs when you activate the Performance and Diagnostic Tool from
the command line.
Each <program> tag must have a pluginName parameter. This value determines which plugin program from the
Performance and Diagnostic Tool runs when the configuration file activates. You can also enter a description
parameter to indicate why you are running the plugin. This tag uses the following example syntax:
• <program pluginName="Server Diagnostics" />
Each plugin then contains one or more methods; these methods run the performance tuning or diagnostic test.
This section documents each program. The next section describes the specific methods you can run within each
program (plugin).
Config Check
The Performance and Diagnostic Tool contains a utility to check the configuration of the application server. Use
this Config Check option to see what issues and potential issues you may have with the application server
configuration.
This feature reviews a number of configuration items, including CPU speed and standard metrics. After the tool
analyzes the configuration, you can display these results in the Performance and Diagnostic Tool. This feature
displays recommended actions you can follow to fix various issues.
To activate this plugin:
• <program pluginName="Config Check" />
Log Capture
This feature automatically collects server logs, all matching .config files (based on the network path), the web.config
file from the Epicor ERP folder, event logs, and the SQL Server error log (SQL ERRORLOG). It can also run the
configuration check.
When you are ready to analyze server performance in the Performance and Diagnostic Tool, navigate to this
directory path to select the logs you want to review. You can also compress these files into an archive file and
then send this archive to Epicor Technical Support or your consultant for further analysis.
Important This feature is currently not available for the command line Performance and Diagnostic Tool.
It will be available in a future release.
Network Diagnostics
You can use the Performance and Diagnostic Tool to verify that baseline network and server performance are
running at optimal levels. Use this feature to evaluate the Network Test standard metric.
To do this, run multiple tests to gauge the overall performance of your network, and compare these results
against the network standard metric. You can also run this same test on client installations to compare the client
results against the server results. If there is latency on the client network, you will find variations in these test
results.
To activate this plugin:
• <program pluginName="Network Diagnostics" />
Server Diagnostics
You analyze the performance of server installations through server tracing logs. This section of the guide describes
how you set up these logs and analyze the results.
To activate this plugin:
• <program pluginName="Server Diagnostics" />
Important To begin evaluating performance, you should always turn on both the client and server tracing
logs. These logs will help your team, Epicor consulting and support professionals, and network professionals
determine the cause of the performance issue(s). By having a series of tracing logs available, both you and
Epicor specialists can more thoroughly evaluate and resolve your performance issue(s).
This topic describes each method that you can run through each program plugin. It describes what task each
method runs and details the optional/required parameters for each method.
CopyAllEventlogs
Plugin program:
• Log Capture
Use this method to copy all relevant event logs to Epicor ERP event log files.
Parameters:
• AppserverNameSeparatedBySemicolon - Defines the application server. For example: <parameter name="AppserverNameSeparatedBySemicolon" value="2016Pool" type="string"/>
• AltRunName - Indicates the alternative run name. For example: <parameter name="AltRunName" value="" type="string"/>
• AltResultsPath - Defines the directory path and file name for the exported log. A string value, use this
parameter to place the log file in any directory path you need. For example: <parameter name="AltResultsPath" value="C:\DB\EventLog.log" type="string"/>
• AltWindowsUserName - Defines an alternative Windows user name.
• AltWindowsUserPassword - Use this parameter to define an alternative user password. For example: <parameter name="AltWindowsUserPassword" value="epicor123" type="string"/>
ExecuteLiveMemoryInspection
Plugin program:
• Live Memory Inspection
Use this method to run a memory inspection against either a selected Process ID or any process that matches
the ProcessNameContains and AppPoolName values. The memory inspection results are then exported into
a Microsoft Excel (.xlsx) file.
Tip If you run this action automatically from a Performance Monitor counter, a series of memory inspection
files are created using increasing incremental numbers. A new file is created each time the process or
processes run; you can then gather a history of how this process performs.
Parameters:
• AppPoolName - Defines the application pool against which you are running the memory test. This value is
used with the ProcessNameContains parameter to determine which processes are included in the memory
test. You can then monitor the selected processes in the application pool for a long period of time. For example:
<parameter name="AppPoolName" value="2016Pool" type="string"/>
• MemoryTrace - Indicates whether you want to run a memory trace against the current environment. This
test generates a list of objects stored in memory while the .NET process ran, displaying both a count of how
many objects of the same type were activated as well as the size (in kilobytes) of each object. The available
options are true and false. This parameter is a bool type. For example: <parameter name="MemoryTr
ace" value="true" type="bool"/>
Tip Only use this memory test option when you need it, as the memory trace is more time consuming
to run and will cause the Epicor ERP application to freeze until it completes the test. Typically you should
not run a memory trace in your live environment.
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet. A string value,
use this parameter to place the Microsoft® Excel® spreadsheet in any directory path you need. For example:
<parameter name="OutputLocation" value="C:\DB\LiveMemoryInspection.xlsx" type="string"/>
• ProcessID - Defines a specific process against which you are running the memory test. If you will run this
memory test against multiple processes, do not enter a value in this parameter. Instead leave this value blank
and enter the range of processes you are testing in the ProcessNameContains and AppPoolName parameters.
• ProcessNameContains - Use this parameter to analyze one or multiple processes. Any process whose name
matches this value is included in the memory test. If you enter a value in this parameter, do not enter a
value in the ProcessID parameter. For example: <parameter name="ProcessNameContains" value="w3wp.exe" type="string"/>
• StackTrace - Indicates whether you want to run a stack trace against the current environment. A stack trace
generates a review of the .NET process flow, detailing a list that runs from the last method to the first
method called within each thread. Note that a stack trace runs quickly (only 500-1000 milliseconds), so you
can run this test in a live environment. The available options are true and false. This parameter is a bool
type. For example: <parameter name="StackTrace" value="true" type="bool"/>
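Drawing these parameters together, a complete action for this method might be sketched as follows. The pool and process values are illustrative; ProcessID is left empty here because ProcessNameContains and AppPoolName select the processes instead:

```xml
<action name="ExecuteLiveMemoryInspection" description="Execute live memory inspection and export to excel">
  <program pluginName="Live Memory Inspection"/>
  <method name="ExecuteLiveMemoryInspection">
    <parameters>
      <!-- select all w3wp.exe processes in the 2016Pool application pool -->
      <parameter name="AppPoolName" value="2016Pool" type="string"/>
      <parameter name="ProcessNameContains" value="w3wp.exe" type="string"/>
      <!-- stack trace is fast; memory trace is left off for a live environment -->
      <parameter name="StackTrace" value="true" type="bool"/>
      <parameter name="MemoryTrace" value="false" type="bool"/>
      <parameter name="OutputLocation" value="C:\DB\LiveMemoryInspection.xlsx" type="string"/>
    </parameters>
  </method>
</action>
```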
ExecuteLiveMemoryInspectionFromDump
Plugin program:
• Live Memory Inspection
Use this method to run a memory inspection against a Memory Dump file you created through either the
Performance and Diagnostic Tool or the Windows Task Manager. You can generate a stack trace and/or a memory
trace against these results.
• DacPath - An optional value, this parameter indicates the path for the mscordacwks file for the computer
from which you generated the memory dump file. For example: C:\temp\Capricorn\CSL_20160622_MemoryDump_csl-app-erptsk1\mscordacwks_amd64_amd64_4.0.30319.34209.dll
• DumpFilePath - Defines the directory path and name for the memory dump file you need to analyze. The
memory trace and/or stack trace results are then generated from this source file. For example: C:\temp\Capricorn\CSL_20160622_MemoryDump_csl-app-erptsk1\w3wp (3).DMP
• MemoryTrace - Indicates whether you want to run a memory trace against the current environment. This
test generates a list of objects stored in memory while the .NET process ran, displaying both a count of how
many objects of the same type were activated as well as the size (in kilobytes) of each object. The available
options are true and false. This parameter is a bool type. For example: <parameter name="MemoryTr
ace" value="true" type="bool"/>
Tip Only use this memory test option when you need it, as the memory trace is more time consuming
to run and will cause the Epicor ERP application to freeze until it completes the test. Typically you should
not run a memory trace in your live environment.
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet. A string value,
use this parameter to place the Microsoft® Excel® spreadsheet in any directory path you need. For example:
<parameter name="OutputLocation" value="C:\DB\LiveMemoryInspection.xlsx" type="string"/>
• StackTrace - Indicates whether you want to run a stack trace against the current environment. A stack trace
generates a review of the .NET process flow, detailing a list that runs from the last method to the first
method called within each thread. Note that a stack trace runs quickly (only 500-1000 milliseconds), so you
can run this test in a live environment. The available options are true and false. This parameter is a bool
type. For example: <parameter name="StackTrace" value="true" type="bool"/>
LoadSysConfigFile
Plugin program:
• Config Check
• Network Diagnostics
Activate this key method to load a .sysconfig file into the Performance and Diagnostic Tool. This .sysconfig file
is used to launch and run the test Epicor ERP environment. You must load the .sysconfig file before running the
Network Diagnostics and Configuration Check tests.
Parameters:
• SysConfigFilePath - Defines the directory path and location of the system configuration (.sysconfig) file for
the Epicor ERP environment you are testing. This file must contain the <UserID> and <Password> for the
account that logs into the Epicor application. For example: <parameter name="SysConfigFilePath"
value="C:\Temp\2012RTest\Deployment\Client\config\localPDT.sysconfig" type="s
tring"/>
ProcessAndExportUITraceFile
Plugin program:
• Client Trace Analysis
Run this method to first activate a client trace (UI Trace) file. You then export the file to a directory path you
define and then save the resulting file to a Microsoft Excel .xlsx spreadsheet.
Parameters:
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet. A string value,
use this parameter to place the Microsoft® Excel® spreadsheet in any directory path you need. For example:
<parameter name="OutputLocation" value="C:\DB\TraceData7848_1.xlsx" type="string"/>
• UITraceFilePaths - If you wish to generate and export multiple client trace files, enter these multiple paths
in this parameter. Use a semicolon to separate multiple directory paths and files. For example: <parameter
name="UITraceFilePaths" value="C:\DB\TraceData7848.txt;C:\DB\TraceData7848_10
1500.txt" type="string"/>
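A full action for this method might be sketched as follows, reusing the paths from the parameter examples above (the description text is illustrative):

```xml
<action name="ProcessAndExportUITraceFile" description="Process client trace files and export to Excel">
  <program pluginName="Client Trace Analysis"/>
  <method name="ProcessAndExportUITraceFile">
    <parameters>
      <!-- multiple client trace files, separated by semicolons -->
      <parameter name="UITraceFilePaths" value="C:\DB\TraceData7848.txt;C:\DB\TraceData7848_101500.txt" type="string"/>
      <parameter name="OutputLocation" value="C:\DB\TraceData7848_1.xlsx" type="string"/>
    </parameters>
  </method>
</action>
```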
ProcessParseAndExportToDatabaseAndExcel
Plugin program:
• Client Trace Analysis
Run this method to first activate a client trace (UI Trace) file. You then export the file to a Microsoft Excel .xlsx
spreadsheet and SQL database.
Parameters:
• ExportPath - Defines the directory path and file name for the exported .xlsx spreadsheet. A string value, use
this parameter to place the Microsoft® Excel® spreadsheet in any directory path you need. For example: <parameter name="ExportPath" value="C:\DB\TraceData7848_1.xlsx" type="string"/>
• UITraceFilePath - If you wish to generate and export multiple client trace files, enter these multiple paths in
this parameter. Use a semicolon to separate multiple directory paths and files. For example: <parameter
name="UITraceFilePath" value="C:\DB\TraceData7848.txt;C:\DB\TraceData7848_101
500.txt" type="string"/>
ProcessAndSaveIntoSQLUITraceFile
Plugin program:
• Client Trace Analysis
Run this method to first activate a client trace (UI Trace) file or files. You then save the results into the SQL
database.
Parameters:
• UITraceFilePaths - If you wish to generate and export multiple client trace files, enter these multiple paths
in this parameter. Use a semicolon to separate multiple directory paths and files. For example: <parameter
name="UITraceFilePaths" value="C:\DB\TraceData7848.txt;C:\DB\TraceData7848_10
1500.txt" type="string"/>
• RunDescription - Defines the description. For example: <parameter name="RunDescription" val
ue="Description" type="string"/>
ProcessAndExportServerLogFiles
Plugin program:
• Server Diagnostics
Run this method to first activate a server trace file or files. You then export the file(s) to a directory path you
define. Each server trace file is transformed into both a Microsoft Excel file (.xlsx) and a .csv file.
Parameters:
• LogFilesSeparatedByComma - If you wish to generate and export multiple server trace files, enter these
multiple paths in this parameter. Use a semicolon to separate multiple directory paths and files. For example:
<parameter name="LogFilesSeparatedByComma" value="C:\DB\ServerLog RMG10APP02S
AC.txt;C:\DB\RMG10APP02SDC.txt" type="string"/>
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet and .csv file.
A string value, use this parameter to place these files in any directory path you need. For example: <parame
ter name="OutputLocation" value="C:\DB\ServerLog RMG10APP02SAC.xlsx" type="st
ring"/>
ProcessAndSaveToSQLServerLogFiles
Plugin program:
• Server Diagnostics
Run this method to first activate a server trace file or files. You then export the file(s) to SQL.
Parameters:
• LogFilesSeparatedByComma - If you wish to generate and export multiple server trace files, enter these
multiple paths in this parameter. Use a semicolon to separate multiple directory paths and files. For example:
<parameter name="LogFilesSeparatedByComma" value="C:\DB\ServerLog RMG10APP02S
AC.txt;C:\DB\RMG10APP02SDC.txt" type="string"/>
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet and .csv file.
A string value, use this parameter to place these files in any directory path you need. For example: <parame
ter name="OutputLocation" value="C:\DB\ServerLog RMG10APP02SAC.xlsx" type="st
ring"/>
• RunDescription - Defines the file description for the exported server trace file. For example: <parameter
name="RunDescription" value="description" type="string"/>
ProcessAndSaveToSQLAndExcelServerLogFiles
Plugin program:
• Server Diagnostics
Run this method to first activate a server trace file or files. You then export the file(s) to a directory path you
define and a SQL Database. Each server trace file is transformed into both a Microsoft Excel file (.xlsx) and a .csv
file.
Parameters:
• LogFilesSeparatedByComma - If you wish to generate and export multiple server trace files, enter these
multiple paths in this parameter. Use a semicolon to separate multiple directory paths and files. For example:
<parameter name="LogFilesSeparatedByComma" value="C:\DB\ServerLog RMG10APP02S
AC.txt;C:\DB\RMG10APP02SDC.txt" type="string"/>
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet and .csv file.
A string value, use this parameter to place these files in any directory path you need. For example: <parame
ter name="OutputLocation" value="C:\DB\ServerLog RMG10APP02SAC.xlsx" type="st
ring"/>
RunConfigCheckAndExport
Plugin program:
• Config Check
Use this method to run a configuration check against the system. After you run the configuration test, you then
export this file using the Microsoft® Excel® .xlsx file format; this spreadsheet contains all the results from the
configuration check.
Parameters:
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet. A string value,
use this parameter to place the spreadsheet in any directory path you need. For example: <parameter na
me="OutputLocation" value="C:\DB\ConfigCheck.xlsx" type="string"/>
RunConfigCheckAndExportToExcel
Plugin program:
• Config Check
Run this method on the configured Epicor ERP 10 server and export the results into a Microsoft Excel (.xlsx) file.
Parameters:
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet. A string value,
use this parameter to place the spreadsheet in any directory path you need. For example: <parameter na
me="OutputLocation" value="C:\DB\ConfigCheck.xlsx" type="string"/>
RunConfigCheckAndSaveToSQL
Plugin program:
• Config Check
Run this method on the configured Epicor ERP 10 server and export the results into the SQL database.
Parameters:
• AltDescription - Defines the alternative description.
• AltConnectionString - Defines the connection string where the contents are saved.
• LoadTestRunId - Defines the ExportRun.LoadTestRunId field information that identifies your execution.
RunConfigCheckAndSaveToSQLAndExcel
Plugin program:
• Config Check
Run this method on the configured Epicor ERP 10 server and export the results into the SQL database and a
Microsoft Excel (.xlsx) file.
Parameters:
• OutputLocation - Defines the directory path where the .xlsx file is saved.
• AltDescription - Defines the alternative description for the ExportRun.ExportDescription field.
• AltConnectionString - Defines the connection string where the contents are saved.
• LoadTestRunId - Defines the ExportRun.LoadTestRunId field information that identifies your execution.
RunNetWorkDiagnosticsAndExport
Plugin program:
• Network Diagnostics
Use this method to first launch a network diagnostic test against the current server. You then export the file to
a directory path you define, saving the resulting file to a Microsoft Excel .xlsx spreadsheet.
Parameters:
• OutputLocation - Defines the directory path and file name for the exported .xlsx spreadsheet. A string value,
use this parameter to place the spreadsheet in any directory path you need. For example: <parameter na
me="OutputLocation" value="C:\DB\NetworkDiagnostics.xlsx" type="string"/>
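Since the Network Diagnostics plugin also requires a loaded .sysconfig file, a recipe for this method might be sketched as follows (the recipe name and paths are illustrative, reusing values from the examples in this guide):

```xml
<recipe name="NetworkDiagnosticsRun">
  <actions>
    <!-- load the environment's .sysconfig file first -->
    <action name="LoadSysConfigFile" description="Load the test environment sysconfig">
      <program pluginName="Network Diagnostics"/>
      <method name="LoadSysConfigFile">
        <parameters>
          <parameter name="SysConfigFilePath" value="C:\Temp\2012RTest\Deployment\Client\config\localPDT.sysconfig" type="string"/>
        </parameters>
      </method>
    </action>
    <!-- then run the network diagnostic test and export the results -->
    <action name="RunNetWorkDiagnosticsAndExport" description="Run network diagnostics and export to Excel">
      <program pluginName="Network Diagnostics"/>
      <method name="RunNetWorkDiagnosticsAndExport">
        <parameters>
          <parameter name="OutputLocation" value="C:\DB\NetworkDiagnostics.xlsx" type="string"/>
        </parameters>
      </method>
    </action>
  </actions>
</recipe>
```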
While you can set up a custom log file location, you can also generate the default log for the Performance and
Diagnostic Tool.
If you do not enter any logging values in your configuration files, the Performance and Diagnostic Tool generates
its default log in the PDT Temporary Items folder. This folder is typically located in the following directory path:
• C:\Users\<YourUserName>\AppData\Roaming\Epicor Software\E10PDT
The default file name:
• PDTAutomation.log
Tip You can find the location of this folder within the Performance and Diagnostic Tool. From the menu,
click Help > About. The About Performance and Diagnostic Tool window displays. Click the Open
PDT Temporary Items Location button.
You can automatically launch the Performance and Diagnostic Tool from a counter within the Performance
Monitor.
Through this key command line feature, you set up a threshold value. When the live or test environment passes
this threshold value, the Performance and Diagnostic Tool automatically activates using a specific configuration
file. You do this by defining the command line on the performance counter.
For example, you can use this feature to define a critical CPU usage level. When the system reaches this critical
level, you can activate the PDT to run a live memory inspection and generate a Microsoft Excel file. You can then
review what caused this spike in CPU usage.
Create Task
You begin by creating a task within the Windows Task Scheduler. This task launches the command line Performance
and Diagnostic Tool (PDT).
Do the following steps on your server machine:
3. The Task Scheduler displays as one of the options; select this icon.
The Task Scheduler window launches.
6. Now in the Description field, enter Launches a memory inspection from the Performance Monitor.
7. If you need to run this task with a different user account, click the Change User or Group... button.
8. Select the Run whether user is logged in or not radio button option. This task will then run whenever
the Performance Monitor trigger activates it.
Add Actions
You will add two actions. One action will send an email alert when the Performance Monitor activates the PDT,
the other action will run the performance test.
3. From the Action drop-down list, select the Send an email option.
5. Now in the To field, enter your email address and the email addresses for other people you would like to
receive your alert. Separate each address with a semi-colon (;).
6. Enter the Subject for the email alert. For example: Live Memory Inspection Activated
7. Now enter the Text you want displayed in the alert message.
8. Indicate the SMTP server that will send out the email alert.
The New Action window will look similar to the following illustration:
9. Click OK.
You return to the Actions tab.
11. From the Action drop-down list, select the Start a program option.
13. Now in the Add arguments (optional) field, enter the available PDT arguments. You include these arguments
so you launch the full functionality of the PDT. For example:
• -cmdLine "C:\DB\ExecuteLiveMemory.xml"
The New Action window will look similar to the following illustration:
You now create a data collection set in the Performance Monitor that monitors the memory usage for your CPU.
When it passes a threshold you define, it triggers the PDT task.
Do the following steps on your server machine:
1. From the Windows desktop, either click Start > Run or launch PowerShell.
3. From the tree view, expand the Data Collector Sets node.
4. Right-click the User Defined node; from the context menu, select New > Data Collector Set.
The Create new Data Collector Set Wizard displays.
7. Click Next.
8. Now for the data type you want to include, select the Performance Counter Alert radio button option.
9. Click Next.
10. Select the performance counters that activate this alert. Click the Add... button.
A list of available counters appears. These counters are grouped by type.
13. Now in the Alert when: drop-down list, select Above; for the Limit value, enter 10.
The alert then activates whenever the counter value rises above this limit.
14. Enter a Sample interval. This value adds a time gap, preventing the Performance Monitor from launching
multiple instances of the PDT at the same time. For the Sample interval and Units, enter 15 seconds.
Your window resembles the following:
15. Now define which task runs when the alert activates. Click on the Alert Task tab.
16. In the Run this task when an alert is triggered field, enter the task you created in the Task Scheduler.
For this example, you enter the Launch PDT live memory inspection task.
Your window looks like the following illustration:
To complete the setup, define the stop condition which causes this data collector set to stop and the task that
runs after the data collection set stops.
2. Select the Restart the data collector set at limits check box.
3. Click OK.
The performance counter is now active. When the system passes the threshold value you defined on the
performance counter, the Performance and Diagnostic Tool activates. It runs using the configuration file you
defined on the Windows task.
The PDT will start collecting data. This data is generated into a series of .xlsx files. Each file uses the
[ProcessID]_[AppPoolName, if available]_[TaskName].xlsx format.
This spreadsheet includes information on the CPU utilization. It also generates additional information if you use
the -dev action, as the Roots and Objects tabs are then included in the generated files.
If you are having performance issues, you can export your log files into a SQL database, where you can easily
prepare queries, views, and stored procedures, and data mine the log files. The main benefit of this mode is that
it skips the Infragistics UI components entirely and goes directly into SQL by bulk exporting all of the data.
When you export several files into the same SQL database, the PDT creates a separate identifier (ExportID) for
each exported file. This identifier appears in all of the exported tables, so you can use it to compare data and
run other data processing tasks.
You can automate this workflow using a simple command line batch file. The following examples show how to
skip the PDT UI completely.
Example Files
This topic illustrates how you automate the export of your log files into a SQL database using a simple command
line batch file. This mode has an even smaller memory footprint than the export to SQL alone, as none of the UI
is rendered and only the processing bits are loaded.
Note Every parameter requires double quotes around it, even for integer arguments.
You can also make the PDT run in the background without showing anything by adding the nocommandline
argument (this is equivalent to the silent option in many other applications).
Example
"Performance and Diagnostic Tool.exe" cmdline inline nocommandline "Server Diagnostics" "ProcessAndExportServerLogFiles" "C:\temp\PDT\Tobi\Export\Server\ServerLog2.txt;C:\temp\PDT\Tobi\Export\Server\ServerLog.txt" "C:\temp\PDT\Tobi\Export\Server\Export.xlsx"
The PDT outputs its log contents to any command prompt it finds in its way, or starts a new one if it finds none,
so you will always know what happened.
System Evaluation
The following diagrams can further help you evaluate your system. You can also use the Performance Report to
get a complete picture of your current system.
Infrastructure Diagram
After you have gathered standard metrics and other performance information from the system, work with your
Epicor representative to create a diagram of your system's physical infrastructure.
This Infrastructure Diagram illustrates the computer hardware used to run the network. It displays the
specifications of each piece of computer hardware and it also illustrates the connections between the hardware
infrastructure. Be sure to include both the specifications for the physical boxes as well as the virtual machines
running over the network.
Minimum details to include on the physical infrastructure diagram:
• Server details such as the server name and server processor.
• Other hardware specifications like RAM, disk, network, OS, VMware details, and so on.
• All servers participating in an Epicor instance including test system, pilot system, and other auxiliary systems.
• If the infrastructure is a VMware instance, document what virtual machines are running with which name on
which server.
• If the infrastructure is a VMWare instance, document the specifications of virtual machines that are powered
on or can be powered on.
• Other technical details to document include Switches, SAN details, and F5 (if present).
• Infrastructure locations (be sure to mention if a large number of users connect over a WAN).
This diagram can be created using a number of different styles and graphics. You can create these diagrams
using Microsoft® Visio® or a similar 2D-object drawing application. The following illustration is provided as an
example; use a diagram style that works best for you.
Instance Diagram
You should also work with your Epicor representative to create a logical, or instance diagram that displays the
machines on which the Epicor ERP application is installed. This Instance Diagram needs to display the specifications
for each client and server machine.
Minimum details to include on the instance (logical) diagram:
• The physical machine and logical machine used for each Epicor component.
• The number of application server boxes and instances.
• Users and their locations; be sure to group these users together. For example, group together local users and
remote WAN users and indicate their locations.
• Other information like load balancing details, database details, database type, and machine names.
• Which component runs which part on local drives versus SAN drives.
This diagram can also be created using a number of different styles and graphics. You can create these diagrams
using Microsoft Visio or a similar 2D-object drawing application. The next illustration gives you an example;
develop a diagram style that works best for you.
Performance Report
Epicor representatives finish evaluating your system by creating a Performance report. This report summarizes
the information gathered through the standard metrics, additional performance tests, and the diagrams.
Although each performance report will be unique for each customer, the following list contains some suggested
sections the report should contain:
• Performance Metrics -- Create a table that compares the Epicor standard metrics against the customer's
performance metrics. This section provides a quick summary of the various performance issues and provides
some possible solutions to address these issues.
• Scalability Issues -- This section defines issues the customer organization may face to expand their use of
the Epicor ERP application. This section of the report encapsulates the information gathered through the
Infrastructure Diagram and the Instance Diagram.
• Specific Performance Issues -- Use this section to document specific performance issues. Any process, entry
program, report, and so on that has slow performance needs to be documented. This area of the report
contains a table that records how long it takes to run these processes under various conditions.
• Appendix -- Place the Infrastructure Diagram and the Instance Diagram in this section.
System Tuning
This section describes some performance tuning options for Microsoft® SQL Server®, your database, and the
overall system.
You can review some server properties to make sure SQL Server is running at optimal performance. You view
these properties in Microsoft SQL Server Management Studio.
2. From the tree view, right click on the SQL Server icon and select Properties.
The Server Properties window displays.
Database Properties
You can review some database properties to make sure SQL Server is running at optimal performance. You view
these properties in Microsoft SQL Server Management Studio.
2. From the Tree View, expand the Databases node, right click on a database icon, and select Properties.
The Database Properties window displays.
3. Parameterization - This database property needs to be set to Forced. To locate this property in the Database
Properties window, select the Options node.
4. Recovery Model -- Verify the Recovery model property is set to Full. With full recovery, committed
transactions are kept in the transaction log until it is backed up, so when you need to recover data, the
transactions are stored in two locations. Once you define this property, you need to schedule transaction
log backups; this keeps the transaction log (xxx.ldf) file size down.
5. Database Size - Be sure to increase, or "pre-grow" the database to the size you estimate it will be in a
year. To do this, select the Files node. Enter this pre-growth value in the Initial Size column. The file growth
size should be by percentage; typically you should set this value to 10%.
6. Log Locations -- The database data file (for example, Epicor10.mdf) and the transaction log (for example,
Epicor10_log.ldf) should be located in separate storage locations. You enter these different locations by
selecting the Files node and defining the path for each file.
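If you prefer T-SQL over the Management Studio UI, the same properties can be set with ALTER DATABASE
statements. This is a sketch under assumptions; the database name Epicor10 and the 50 GB initial size are
examples only, not values from this guide.
Example
-- Parameterization: Forced
ALTER DATABASE [Epicor10] SET PARAMETERIZATION FORCED
-- Recovery model: Full
ALTER DATABASE [Epicor10] SET RECOVERY FULL
-- Pre-grow the data file and set 10% file growth (example values)
ALTER DATABASE [Epicor10] MODIFY FILE (NAME = 'Epicor10', SIZE = 50GB, FILEGROWTH = 10%)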
TempDB Files
You should create one tempdb file for every two CPU cores available on your server. This increases concurrent
use of the server when the tempdb files are accessed.
These tempdb files should not be located on the same drive as your operating system. If possible, these files
should also be in a separate drive other than the Epicor ERP database.
Use the following script to add the correct number of tempdb files. You modify a value in this script to reflect
the number of CPU cores you have available:
USE [master]
GO
DECLARE @cpu_count int,
@file_count int,
@logical_name sysname,
@file_name nvarchar(520),
@physical_name nvarchar(520),
@size int,
@max_size int,
@growth int,
@alter_command nvarchar(max)
-- Number of logical CPUs visible to SQL Server
SELECT @cpu_count = cpu_count FROM sys.dm_os_sys_info
-- Current number of tempdb data files, plus the primary file's path and settings
SELECT @file_count = COUNT(*) FROM tempdb.sys.database_files WHERE type_desc = 'ROWS'
-- Assumes the primary file uses fixed (MB) growth, not percent growth
SELECT @physical_name = physical_name, @size = size * 8 / 1024, @growth = growth * 8 / 1024
FROM tempdb.sys.database_files WHERE file_id = 1
WHILE @file_count < @cpu_count * 0.5 -- Add * 0.25 here to add 1 file for every 4
                                     -- cpus, * .5 for every 2 etc.
BEGIN
SELECT @logical_name = 'tempdev' + CAST(@file_count AS nvarchar)
SELECT @file_name = REPLACE(@physical_name, 'tempdb.mdf', @logical_name + '.ndf')
-- Build and run the ALTER DATABASE command that adds the new file
SELECT @alter_command = 'ALTER DATABASE [tempdb] ADD FILE (NAME = [' + @logical_name
    + '], FILENAME = ''' + @file_name + ''', SIZE = ' + CAST(@size AS nvarchar)
    + 'MB, FILEGROWTH = ' + CAST(@growth AS nvarchar) + 'MB)'
EXEC (@alter_command)
SET @file_count = @file_count + 1
END
System Tips
Microsoft Tools
This section describes the performance tools available from Microsoft. These tools are either included in your
Windows operating system or can be downloaded from Microsoft.
You can evaluate the efficiency of a Storage Area Network (SAN) by running a Microsoft subsystem benchmark
utility called Diskspd. Through this utility, you test your system against the Storage Disk I/O standard metric.
The following tests are designed to exercise various aspects of an I/O disk subsystem: bandwidth (megabytes
per second, MB/sec), latency (milliseconds), and the performance of your I/O system with the desired block size
(64KB), file size, and type of I/O -- read or write, and sequential versus random writes. These parameters have
a great impact on IOPS, so they are specified here exactly as needed for testing with Diskspd. On the same
machine you will get a different IOPS number if you change any one parameter, so testing with the Epicor
recommended parameters is highly recommended.
You download the Diskspd utility from Microsoft. Then you run a series of these three tests through this utility.
The following steps describe how you install, set up, and test I/O disk subsystem performance.
2. Diskspd doesn’t require installation; Diskspd.exe can be run via command prompt (with elevated permissions).
Based on the version of SQL Server installed, choose the correct path:
• ..\Diskspd-v2.0.15\x86fre (For 32Bit).
• ..\Diskspd-v2.0.15\amd64fre (For 64Bit).
a. Open a Windows Command Prompt on the server where you have downloaded and unzipped the
Diskspd utility. For example, Start > Run > cmd opens the command window.
b. Change the directory to the location where Diskspd utility is extracted. For example, C:\Diskspd-v2.0.15\.
Parameter Description
-w100 100% Writes, No Read
-t8 8 Worker threads used against test file
-o8 8 outstanding IO requests
-d900 The test will last for 900 seconds
-r Random write test
-b64k 64kb block size per IO
-h Disabling software caching, only hardware caching
-L Capture Latency Information
-c80G Creates a workload file of 80GB
C:\iotest.dat This is the workload file of 80GB that will be
created, the drive should be same as the one which
has .mdf file
C:\mdfiotestresult.txt The results of Diskspd would be printed on
C:\mdfiotestresult.txt
Parameter Description
-w100 100% Writes, No Read
-t2 2 Worker threads used against test file
-o8 8 outstanding IO requests
-d900 The test will last for 900 seconds
-r parameter missing Sequential write test
-b64k 64kb block size per IO
-h Disabling software caching, only hardware caching
-L Capture Latency Information
-c80G Creates a workload file of 80GB
C:\iotest.dat This is the workload file of 80GB that will be
created, the drive should be same as the one which
has .ldf file
C:\ldfiotestresult.txt The results of Diskspd would be printed on
C:\ldfiotestresult.txt
Parameter Description
-w100 100% Writes, No Read
-t8 8 Worker threads used against test file
-o8 8 outstanding IO requests
-d900 The test will last for 900 seconds
-r Random write test
-b64k 64kb block size per IO
-h Disabling software caching, only hardware caching
-L Capture Latency Information
-c80G Creates a workload file of 80GB
C:\iotest.dat This is the workload file of 80GB that will be
created, the drive should be same as the one which
has .mdf file
C:\mdfiotestresult.txt The results of Diskspd would be printed on
C:\mdfiotestresult.txt
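For reference, the parameters in the first table assemble into a single command line such as the following.
This is a sketch only; adjust the drive letters to match the drive that holds your .mdf file, and substitute the
values from the second table (-t2, no -r) for the sequential .ldf test.
Example
diskspd.exe -w100 -t8 -o8 -d900 -r -b64k -h -L -c80G C:\iotest.dat > C:\mdfiotestresult.txt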
Use the results of each test to evaluate how your disk I/O subsystem works in comparison to similar subsystems.
Event Viewer
The Event Viewer is an administrative tool installed with your Windows operating system. Use this tool to review
exceptions that occur outside of the Epicor ERP application.
The application server tracing log you activate within the Epicor Administration Console records business logic
exceptions internal to the Epicor ERP application. For example, this log records an error when too many characters
are entered in a field or when a record already exists, how long it takes a business object method to run, and so on.
Any errors that occur outside of the internal business logic are recorded in the Event Viewer. Any errors you are
not seeing in the application server log should display in this administrative tool. Examples of items not captured
by the application server log include framework exceptions, security exceptions, fatal errors, and similar items.
Be sure to launch the Event Viewer to catch other problems that may slow performance. When you use the Event
Viewer with the application server log, you have a complete picture of the application, server, and network issues
you may be experiencing.
You access this Windows tool through a Windows control panel. Launch the Administrative Tools control
panel; the Event Viewer displays as a shortcut. You should add this shortcut to your desktop so you can more
easily launch this key tool when you need it.
The SQL Performance Dashboard displays a visual report that shows the current performance of your SQL Server.
This key tool provides you with a snapshot of how well the CPU is utilized, current waiting requests, and other
key performance information.
You can use this tool to gauge how well your Epicor ERP application is running. Since the Performance Dashboard
displays the current state of the system, launch this tool to both determine if poor performance is occurring and
evaluate the performance results after you have made a change.
The dashboard contains graphics and hyperlinks. You can click on these items to drill down further into the
accumulated information. Use the Performance Dashboard with the other performance tuning tools to develop
a complete understanding of your system and application performance.
You download this tool and then set it up to monitor your SQL Server.
While you can install the Performance Dashboard on your SQL Server machine, you can also install this tool on
other machines as long as you have SQL Server Management Studio available on them. You will also need
access to SQL Server. You can then indicate from which SQL Server instance you want to pull the performance
data.
2. Download the Performance Dashboard installer. Be sure to use the SQL Server 2012 version.
7. Right click the database you wish to review; from the context menu, select Custom Reports.
The SQL Performance Dashboard is now connected to your SQL Server instance and is displaying performance
data. You can now always launch the Performance Dashboard by first right-clicking the database, selecting
Reports from the context menu, and then selecting the performance_dashboard_main report.
The Performance Analysis of Logs (PAL) tool is a shareware program that evaluates server logs to generate a
detailed HTML performance report. This report evaluates several areas of your system and displays alerts when
logs exceed specific performance thresholds.
This report graphically displays key performance counters, and then alerts you when the system test surpasses
the optimal performance level defined for the counter. These thresholds were originally defined by members of
the BizTalk Server and Microsoft Support teams, and so the counters use established performance thresholds to
determine the report results. Use this report tool to both automate performance log analysis and save
troubleshooting time.
A common mistake when evaluating poor performance is to assume the cause is an application process, when
instead the actual issue is the overall scalability of the system. If too many users access a system that does not
contain enough resources to handle them, slow performance can result. The PAL tool generates a report that
can help you accurately evaluate whether the issue is application performance or system scalability. Generally,
poor performance has one of these main causes:
• Application Performance – The overall Epicor ERP application runs slowly due to a configuration issue or
some other cause. To resolve this situation, the application can be optimized in various ways. If the performance
issue persists, further performance optimization may be required from Epicor.
• Program Performance – When a specific program is run, like Customer Shipment Entry, it runs slowly. Due
to the bogged down database access, this can cause other areas of the system to run slowly as well. To fix
this performance, evaluate the process to determine if small changes can resolve the issue or if the optimization
needs to be handled by Epicor.
• Scalability – The application is set up correctly, but the daily load on the system slows everything down. To
improve this situation, better system hardware may be required.
The PAL tool generates an accurate report that can help you determine what specifically is causing the slow
performance. By evaluating the threshold alerts generated by this report, you can pinpoint the cause(s) of the
slow performance.
The PAL tool is available as a free download from the CodePlex website. To download this tool, navigate to the
following website: http://pal.codeplex.com/
PAL tool documentation is also available from Codeplex. To download sample reports and other documentation,
navigate to this website: https://pal.codeplex.com/releases/view/6759
The following workshop illustrates how you generate the logs you need for the PAL tool. It then shows you a
case study that configures and runs the PAL tool system report.
Important The Performance Analysis of Logs (PAL) Tool only runs on 64-bit machines. You will not be
able to install this tool on a 32-bit machine, so be sure to install the PAL tool in a 64-bit environment.
However, you can still upload system logs generated on 32-bit machines and analyze them through the
PAL tool.
Gather Logs
These instructions describe how you set up the logs you need to analyze through the Performance Analysis of
Logs (PAL) tool.
You run this log on the Application server(s) and the SQL server — one log instance on each server. To generate
the best results, run this log over a work day that experiences heavy network load.
To begin, you generate a .blg file through the Windows Performance Monitor (perfmon). This performance
tool is available on all Windows systems. Use it to record some key performance counters you later evaluate using
the Performance Analysis of Logs (PAL) tool.
1. From the Windows desktop, either click Start > Run or launch PowerShell.
3. From the Tree View, expand the Data Collector Sets > User Defined folder.
4. Right-click this folder; from the context menu select New > Data Collector Set.
The Create new Data Collector Set window displays.
5. In the Name field, enter SystemOverview (or another name that helps you identify the log file).
7. Click Next.
8. You are asked which template you would like to use. Select the System Performance option.
9. Click Next.
10. Now indicate the folder where you would like the data to be saved. You can enter this Root directory
directly or click the Browse... button to find and select this folder.
12. You are next asked if you want to create the data collector set. Select the Save and close radio button
option.
14. Right-click the SystemPerformance node; from the context menu, select Start.
The Windows Performance Monitor now captures data throughout the working day and stores it in a .blg file in
either C:\PerfLogs or C:\PerfLogs\<Username> (where <Username> is the name of the user logged into
Windows).
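If you prefer a scripted alternative to the Performance Monitor wizard, the built-in logman utility can create
a similar data collector set from the command line. The counter names below are common examples and are
an assumption, not a list taken from this guide.
Example
logman create counter SystemOverview -f bin -si 00:00:15 -o C:\PerfLogs\SystemOverview -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Avg. Disk sec/Transfer"
logman start SystemOverview
logman stop SystemOverview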
Tip The number of CPU cores affects how long the BULK INSERT statements run on target machines. You
should see high use of the CPU cores on the target MSSQL database, as BULK INSERT statements work
effectively when cores execute in parallel (max degree of parallelism, MAXDOP), improving the performance
of the server.
Activate PowerShell
The PowerShell program needs to be active in order for the PAL tool to generate Perfmon analysis reports.
The PowerShell script that PAL runs is not signed, so you may need to run the following command to execute
these unsigned scripts.
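The command itself is not reproduced in this excerpt. The cmdlet typically used for this purpose is
Set-ExecutionPolicy, run from an elevated PowerShell prompt; RemoteSigned is shown only as an example
policy, so choose the policy your security guidelines allow.
Example
Set-ExecutionPolicy RemoteSigned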
4. When you are asked if you want to change the execution policy, enter Y for Yes.
You are now ready to run the PAL tool to generate the performance report.
After you have collected the log data you need, stop the log and save the generated .blg file. You are now ready
to analyze the log through the PAL tool.
Do the following to generate the performance analysis. Be sure to run this process on a 64-bit environment.
1. Launch the PAL tool; you should be able to launch the program from the Start button. However, the default
install path is either C:\Program Files\PAL or C:\Program Files (x86)\PAL.
The PAL Wizard window displays.
3. For the Counter Log Path, click the Browse (...) button to find and select the .blg file. For this example,
select the SamplePerfmonLog file and click Open.
4. Click Next.
The Threshold File options display.
5. From the Threshold File Title drop-down list, select the title of the file you used to gather data through
the Windows Performance Monitor (Perfmon). If you used the SystemOverview.xml file, you would select
System Overview from this list.
6. Click Next.
The Questions options display.
7. Answer the various Questions that appear under the Questions list. These questions identify the various
aspects of the system. Answer the following questions:
• OS – Defines the operating system and architecture you use.
• PhysicalMemory – Indicates how much RAM is available on the server, in gigabytes.
• UserVa - Defines the size of the user mode virtual address space in megabytes (MB). The PAL Wizard
uses this value if you are evaluating a 32-bit system, but it ignores this value on a 64-bit system.
8. Click Next.
The Output Options display.
9. Indicate how many time intervals (timeslices) you want to use on the report. These time intervals separate
the log into regular time units. Click the Analysis Interval drop-down list to select the interval you need;
select AUTO to divide the log into 30 timeslices.
11. For the Output Directory file, accept the default or click the Browse (...) button to specify where you
would like to place the PAL analysis report.
12. If you wish, enter the HTML Report File Name you need. Notice by default it uses the [LogFileName] as
a prefix and adds a [DateTimeStamp] suffix.
14. Select the Execute: Execute what is currently in the queue radio button option.
The PAL tool will then analyze the .blg file. Depending on the size of the file, it may take as long as 2-3 hours to
finish the analysis. When the process is complete, the .html file is available in the File Output location you specified.
You display this report in your internet browser.
You click on links at the top of this .html report to locate the performance analysis section you need.
Tip If you need more help evaluating these logs, you could also gather these logs and send them to Epicor.
Epicor specialists can then evaluate the performance of the Epicor ERP application and the overall system.
Epicor has identified a series of SQL Server trace flags that you should activate for all your Epicor ERP installations.
These trace flags define server characteristics and actions which will improve how SQL Server interacts with the
Epicor ERP application.
Activate these trace flags:
You can use this information together with the deadlock graph to determine what processes are causing deadlocks.
To learn more about deadlocks, review the Deadlocks section later in this guide.
You can add SQL Server trace flags to the startup parameters for each instance of SQL Server. You do this through
the SQL Server Configuration Manager.
1. Launch SQL Server Configuration Manager. In some Windows installations, you can launch it by clicking
Start > Microsoft SQL Server > Configuration Tools > SQL Server Configuration Manager.
The SQL Server Configuration Manager displays.
3. Right-click the SQL Server instance you need to update; from the context menu, select Properties.
The Properties window displays.
5. In the Specify a startup parameter: field, enter the trace flag you want to activate.
6. Click Add.
The trace flag displays in the Existing parameters: field.
7. If you wish to remove the trace flag, highlight it on this list and click the Remove button.
8. Continue to add or remove trace flags to this SQL Server instance. When you finish, click Apply and then
OK.
Now the next time this SQL Server launches, these trace flags activate.
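After the restart, you can confirm which trace flags are active by querying the instance. DBCC TRACESTATUS
is a standard SQL Server command shown here as an optional quick check; it is not part of the steps above.
Example
-- Lists all trace flags currently enabled globally on this instance
DBCC TRACESTATUS(-1)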
Add to Registry
Instead of activating SQL Server trace flags on each SQL Server instance, you can instead add them to the registry
for all SQL Server instances on your system.
If you have multiple SQL Server instances running, manually changing the startup parameters on each instance
can be time consuming. When you instead add these SQL Server trace flags to the registry script, all SQL Server
instances now use these trace flags.
Be sure you test this script against a SQL server instance. When the test works as expected, you can extend this
script to run against all SQL Server instances by using a Central Management Server.
3. You first indicate which trace flag activates. You add this flag to the @Parameters variable.
Example
declare @Parameters varchar(max)='-T1222',
@Argument_Number int,
@Argument varchar(max),
@Reg_Hive varchar(max),
@CMD varchar(max)
4. In case the parameter already exists, you next enter code to clean up the specified startup parameter. This
ensures the trace flag is reusable, so the code will not break if the parameter already exists.
Through the following code, you check the registry for a SQL instance that uses the sys.dm_server_registry
DMV. This will return the correct path for the startup parameters.
Example
if exists(select * from sys.dm_server_registry where value_name like 'SQLArg%'
    and convert(varchar(max), value_data)=@Parameters)
begin
    select @Argument=value_name,
        @Reg_Hive=substring(registry_key,len('HKLM\')+1,len(registry_key))
    from sys.dm_server_registry
    where value_name like 'SQLArg%' and convert(varchar(max),value_data)=@Parameters
    set @CMD='master..xp_regdeletevalue
    "HKEY_LOCAL_MACHINE",
    "'+@Reg_Hive+'",
    "'+@Argument+'"'
    exec(@CMD)
end
5. You next leverage xp_regwrite to add the trace flag as a startup parameter.
Example
----------------------------------------------------{Add Parameter}
--select * from sys.dm_server_registry where value_name like 'SQLArg%'
select @Reg_Hive=substring(registry_key,len('HKLM\')+1,len(registry_key)),
    @Argument_Number=max(convert(int,right(value_name,1)))+1
from sys.dm_server_registry
where value_name like 'SQLArg%'
group by substring(registry_key,len('HKLM\')+1,len(registry_key))
-- Build the next SQLArg value name from the computed argument number
set @Argument='SQLArg'+convert(nvarchar,@Argument_Number)
set @CMD='master..xp_regwrite
"HKEY_LOCAL_MACHINE",
"'+@Reg_Hive+'",
"'+@Argument+'",
"REG_SZ",
"'+@Parameters+'"'
exec (@CMD)
7. For this update to run, your SQL Server instances need to refresh. You do this in SQL Server Management
Studio; stop and restart the SQL Server Service.
Now each time SQL Server instances launch throughout your system, the trace flag activates on them.
Customize Logs
The client (UI Trace) log and the server log have a series of default tracing options you can activate. These options
are described in the previous Server Diagnostics and Client Trace Analysis sections of this guide.
However, you can also customize what the server log and the client log capture, displaying other operations or
server activity you may want to review. This section of the guide describes these custom options and how to
activate them.
You activate server logs within the Epicor Administration Console. You connect to an application server and then
launch the Application Server Settings window. Through this window you can select a series of standard and
advanced logging options. After you activate the options you want and click OK, the AppServer.config file
updates with your selections.
However, you can add other profiles and tracing options by manually updating the AppServer.config file. You
can then capture additional data to help you evaluate a performance issue and/or track server activity.
3. Within your text editor, click File > Save As and save the template as the AppServer.config. These default
profile and tracking options are now available to use.
4. You can activate a profile or tracking option by setting the disabled value to "false".
5. You can also add additional profiles and traces. Place the profile or trace syntax within the <TraceFlags>
setting.
The server log now records the profiles and traces you activated and added to the AppServer.config file.
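As an illustrative sketch only, an activated entry inside the <TraceFlags> setting might look like the following.
The element shape shown here is an assumption; follow the structure already present in your AppServer.config
template rather than this fragment.
Example
<TraceFlags>
  <!-- assumed element shape; set disabled="false" to activate -->
  <trace id="profile://system/db/epiprovider/sqltext" disabled="false" />
</TraceFlags>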
Profile Options
This topic describes the profiles you can add to both server and client logs.
Important Be aware that you should not run profiles over an extended period of time. Profiles gather a
lot of information and will slow performance in your production environment. After you gather the data
you need, deactivate these profiles in either the AppServer.config or the <FileName>.sysconfig file.
• profile://system/db/epiprovider/sqltext -- This profile displays the text for queries run by the entity
framework (EF) provider. Like the epiprovider profile, it also captures EpiCommand activity. For example:
<EpiCommand queryType="SELECT" spid="53" table="[Ice].[Security]">
<![CDATA[SELECT TOP (1)
[Extent1].[Company] AS [Company],
[Extent1].[SecCode] AS [SecCode],
[Extent1].[EntryList] AS [EntryList],
[Extent1].[NoEntryList] AS [NoEntryList],
[Extent1].[SecurityMgr] AS [SecurityMgr],
[Extent1].[SystemCode] AS [SystemCode],
[Extent1].[SystemFlag] AS [SystemFlag],
[Extent1].[SysRevID] AS [SysRevID],
[Extent1].[SysRowID] AS [SysRowID],
[Extent1].[GlobalSecurityMgr] AS [GlobalSecurityMgr],
[Extent1].[CompanyVisibility] AS [CompanyVisibility]
FROM [Ice].[Security] AS [Extent1]
WHERE ([Extent1].[Company] = @p__linq__0) AND (@p__linq__0 IS NOT NULL) AND (
[Extent1].[SecCode] = @p__linq__1) AND (@p__linq__1 IS NOT NULL) AND (0 = [Ex
tent1].[CompanyVisibility])]]>
</EpiCommand>
• profile://system/db/hits -- This profile displays a text representation of every entity framework query
expression tree. For example:
<DBStatement type="XXXDef" duration="1.6358" rowCount="1" hashCode="570719193
4738794255" dbContextId="f8ca0589-de0c-4043-8cfb-27c741f90dec" />
<Expression hc="-8506620438275244128"><![CDATA[(ctx, cacheKey_ex) => Queryabl
e.FirstOrDefault(Queryable.Where( ctx.Cache, row => row.CacheName == cacheKey
_ex.CacheName && row.CacheKey == cacheKey_ex.CacheKey))]]>
</Expression>
• profile://ice/fw/reporting -- Activate this profile to display an instance for each rendered report. Each logged
instance displays the SSRS RenderFormat, the number of bytes used to render it, and how long it took to
render. Because users can create routing rules with breaks that generate multiple reports during the same
run, this profile can track a large number of reports. If a report has routing rules, each subsequent report
generated by these rules displays on a separate line. For example:
<ReportRendered reportPath="/reports/ChgLogReport/ChgLogReport" SSRSRenderFor
mat="PDF" bytesRendered="94348" duration="376.0153" />
• profile://system/db/stacktrace -- Use this profile to add stack trace information to the DBStatement. This
profile is useful when you need to figure out the source of a query that runs slow or freezes the application.
Each <stackFrame> instance displays the method attribute; this attribute contains a fully qualified method
name with its class name, return type, parameter names, and parameter types. Each instance also contains
the module name and, if available, the source file name and source code line number. For example:
<DBStatement type="ZDataField" duration="1.0003" rowCount="1" hashCode="51337
71154712867921" dbContextId="ff4cdc0b-9d83-48cd-b24c-1481a6702da9">
<stackTrace>
<stackFrame method="TResult Epicor.Data.DBExpressionCompiler.InvokeSing
le(Expression expression, Cache currentCacheSetting, Boolean cacheQuery, TCon
text context, Func`2 getDataCacheKey, Func`2 compileQuery, Func`2 executeQuer
y)" module="Epicor.System.dll" file="c:\_projects\RL3.1.1.0\Source\Framework\
Epicor.System\Data\DBExpressionCompiler.cs" line="352" />
</stackTrace>
</DBStatement>
• profile://ice/fw/tableset -- This profile displays attributes about the tableset intercepts that run from the
server such as BeforeGetRows, AfterUpdate, <Table>BeforeCreate, and so on. For example:
<RowEvent table="Tip" method="BeforeGetNew" rows="1" duration="3.924000000000
0004" />
<SysRowID>e0e63093-3348-4de6-b3ea-a33b015bfbe0</SysRowID>
<SpecifiedProperties>/28A</SpecifiedProperties>
<ColumnNames>Company</ColumnNames>
<Company>EPIC06</Company>
<CountFreq>11</CountFreq>
<ExcludeFromCC>false</ExcludeFromCC>
<StockValPcnt>99</StockValPcnt>
<PcntTolerance>3.00</PcntTolerance>
<CalcPcnt>true</CalcPcnt>
<CalcQty>false</CalcQty>
<CalcValue>true</CalcValue>
<QtyTolerance>0</QtyTolerance>
<ValueTolerance>100</ValueTolerance>
<ShipToCustNum>0</ShipToCustNum>
<SysRevID>0</SysRevID>
<BitFlag>0</BitFlag>
</ABCCodeRow>
</Table>
</UpdExtABCCodeTableset>
</UpdExtInput>
Available node names:
• UpdExtInput – input tableset.
• UpdExtReturn - tableset being returned to client.
• UpdExtGetNew – tableset after GetNew service call.
• UpdExtBeforeUpdate - tableset before sending to Update service call.
• UpdExtAfterUpdate - tableset returned after Update service call.
EpiCommand Attributes
Both the epiprovider and the epiprovider/sqltext profiles use the EpiCommand function.
The EpiCommand function records these attributes:
Attribute Purpose
table Displays comma-separated tables from the query.
proc Defines the name of the procedure that ran.
spid Displays the Server Process ID, which is the @@SPID server variable.
lockHint Contains all lockHints in the sql statement, everything inside With(XXX).
queryType Displays comma-separated sql-statements in the query, such as
SELECT,Update,Delete,Insert.
hashCode Groups DBStatements with their generated SQL statements; you can then group the
DBStatements within the Performance and Diagnostic Tool. You collect the hashCode
attributes when either the profile://system/db/hits or trace://system/db/hits log options
are active.
The server traces capture specific operations or server activity that occurs between the client and the server.
They require fewer system resources. However, as a good practice, shut off traces you no longer need, as
you typically will not want to continually record this additional data.
• trace://ice/fw/boreader -- Run this trace to record information about BOReader method calls. For example:
<BOReader service="Ice:BO:Company" method="GetRows" pageSize="0" columnList="
ESEURL,ESENotificationSourceID"><![CDATA[Company = 'EPIC06']]></BOReader>
Possible attributes:
Attribute Purpose
service Displays the full name of the called service.
method Indicates either the GetRows or GetList method.
pageSize Displays how many items were requested to return (0 - unlimited).
columnList Indicates which columns are requested to return.
CData sections Displays the whereClause for the call.
• trace://ice/fw/cache -- Use this trace to display details about the Data Model cache. For example:
<DMCache msg="Adding item, CacheName:PropertyBag CacheKey:e476399e-d2ad-481d-
9b57-08e3e9f2e570" />
• trace://ice/fw/datacontext -- Activate this trace to display when a new DataContext transaction is instantiated.
For example:
<DataContext msg="Creating new DataContext, ID:704c4832-7f3d-4811-9244-4bc446
a323ec" />
• trace://ice/fw/disposable -- Run this trace to display errors that indicate possible memory leaks. For example:
<Error msg="Found 1 undisposed data context object(s) found after Ice:Lib:Ses
sionMod/SessionModSvcContract/Sync. IDs: 7ee60938-84d7-4327-8091-fdc825112e2b
" />
• trace://ice/fw/DynamicQuery -- Activate this trace to see query execution information. Use this information
to analyze performance issues. For example:
<Op Utc="2013-08-15T08:48:28.9994569Z" act="Ice:BO:DynamicQuery/DynamicQueryS
vcContract/Execute" dur="20.9593" cli="fe80::35a2:7247:6cab:d0e9%11:53737" us
r="manager" tid="17" pid="11176">
<BAQ QueryID="udfield" />
<BAQ Company="EPIC03" />
<BAQ SQLExecTime="12.2756" />
<BAQ DataFetchTime="3.0597" />
<BAQ PagingMethod="Simple" />
<BAQ PageNumber="1" />
<BAQ PageSize="2" />
<BAQ TotalTime="19.4385" />
<BAQ TotalRows="2" />
</Op>
• trace://ice/fw/perf/Initialize -- Use this trace to display details on when the framework and business services
initialize on the system. For example:
<Trace uri="trace://ice/fw/perf/Initialize" duration="0.0004" root="False" />
• trace://ice/fw/reporting -- Activate this trace to display a summary of reporting activity. Each entry displays
how many reports rendered, how long it took the business layer to generate report data, and how long it
took to write report data to the SQL temporary reporting database. For example:
<ReportInformation reportsRendered="2" bytesRendered="181272" renderDuration=
"764.6249" businessLayerDuration="53.9687" writeToSqlDuration="53.1175" />
• trace://ice/fw/session -- This trace displays details about the current session as well as the license associated
with this session. For example:
<Session msg="Obtaining license for session type:'DefaultUser' ID='00000003-b
615-4300-957b-34956697f040'" />
<License msg="Extending current License Claim on Session:00000003-b615-4300-9
57b-34956697f040" />
<Session msg="Refresh session: e476399e-d2ad-481d-9b57-08e3e9f2e570. Old Last
Accessed: 3/1/2013 7:40:00 AM" />
• trace://ice/fw/trigger -- Use this trace to capture details on when each trigger executes. For example:
<Trigger table="XXXChunk" type="Deleted" pk="XXXChunk<~><~>EP<~>MainMenuHistory<~>MANAGER<~><~><~>1<~>" rowId="f1b05943-98b7-4deb-a0e0-6973e60f0e96" duration="0.1446" />
<Trigger table="XXXDef" type="Deleted" pk="XXXDef<~><~>EP<~>MainMenuHistory<~>MANAGER<~><~><~>" rowId="cfbaa373-41f2-42ad-9ba4-32b1202b2b67" duration="0.1446" />
• trace://ice/log -- This trace activates the general log entry. The log will display the following tag:
<Log msg="some message" />
• trace://system/db/hits -- Activate this trace to track details about SQL Queries executed through the
framework. For example:
<Sql queries="23" cacheHits="0" time="302.791" qryTypeCount="6" />
You can review the custom server traces you add to the server log or the client log through the Performance and
Diagnostic Tool. These traces display as separate sections within the Results grid.
The following example illustrates how you display custom traces in a server log.
4. Navigate to the directory file location that contains the server logs you need to review.
6. Click Open.
The selected log(s) display in the Log File grid.
8. Group the results by method. Click and drag the Method Name column to the "Drag a column header
here to group by that column" area.
9. Expand the node for the Method Name that contains the traces you wish to review.
10. Now expand the node for the specific method call.
11. Scroll down through the results until you see the Trace category.
The custom server traces that activated during this method call display within this node.
Like server logs, you can temporarily customize the client (UI Trace) log so it runs with a series of custom trace
options. You can also manually customize client traces to record exception messages in a separate trace log from
the client log.
When you activate custom traces in the client log, you can test for a specific issue or gather additional information.
Through this option, you can receive more targeted information about each client call. You can then display this
information within the Performance and Diagnostic Tool for additional analysis.
Exception messages record anything unusual that may occur while a client runs. These messages can give you a
more accurate idea about what may be causing performance issues. To do this, you first activate this additional
log and then indicate which exception messages you want the log to track. You then define the logging level
for each exception message. You can set up a trace listener for an exception message to only write errors, only
write warnings, only display information, or write every error, warning, and information entry the trace listener
captures.
Important Be aware that you should rarely customize the traces for the client logs. The changes you manually
set up are temporary; they only last during the current client session. When a user logs out and then back
into the client installation, the application retrieves the tracing values from the database. This overwrites
the trace options you manually entered in the .config file. Only customize these log traces when you need
to review specific information during the current session that the client log does not normally capture.
To set up traces for use during multiple sessions, select them through either User Account Security
Maintenance or the Tracing Options Form. The application stores the tracing options you select on
these programs and does not overwrite them when the user next logs into the client application.
You customize the client log by first setting up a new <YourFileName>.config file. You then activate the options
you want the client log to track.
3. If you wish to record exception messages in a separate trace file, you can optionally remove the comments
from the <trace> tag. By default, it displays enclosed in comments:
<!--<trace autoflush="true" indentsize="4">
<listeners>
<add name="myListener" type="System.Diagnostics.TextWriterTraceList
ener" initializeData="c:\temp\EpicorTrace.log" />
</listeners>
</trace>-->
4. You next define the separate log file that will track the exception message information. In the initializeData
attribute, enter the directory path and file name for the exception message log file. For example:
initializeData="c:\temp\ExceptionTrace.log"
5. Now to customize the traces on the client log, locate the EpicorAppConfigFileName setting.
9. You next create this client .config file. To do this, launch the Epicor ERP application.
This creates the new <YourFileName>.config file in the directory you defined.
12. Activate the switches you need. You can enter either numeric values or text values:
• Inactive -- Either "0" or "Off"
• Errors -- Either "1" or "Errors"
• Warnings -- Either "2" or "Warnings"
• Information -- Either "3" or "Info"
• Verbose -- Either "4" or "Verbose"
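The switch values above live in the standard .NET System.Diagnostics section of the client .config file. As a hedged sketch (the exact layout of your <YourFileName>.config may differ; the switch names shown come from the client trace list later in this section), a switches block that records full datasets verbosely while capturing only catch-block errors might look like:

```xml
<system.diagnostics>
  <switches>
    <!-- Record the entire dataset content (Verbose) -->
    <add name="DataTraceFullDataSets" value="4" />
    <!-- Capture catch-block exception messages, errors only -->
    <add name="Catch" value="1" />
    <!-- Leave form-load tracking inactive -->
    <add name="FormLoad" value="0" />
  </switches>
</system.diagnostics>
```

Text values ("Verbose", "Errors", "Off") can be substituted for the numeric values shown.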
Now when you or the user activates the trace log on the client installation, the log tracks the switches you turned
on in this file. You can then open these customized client logs in the Performance and Diagnostic Tool. Use
the Client Diagnostic features described previously in this guide to filter and analyze the results.
Remember these options only last during the current client session. The next time the user logs in, the
<YourFileName>.config is overwritten with values from the database.
For example, check box options found on the Tracing Options Form and User Account Security
Maintenance, such as DataTraceFullDataSets (Write Full DataSet) and DataTraceIncludeServerTrace (Include
Server Trace), can also be activated in this file. When users activate or de-activate these same options on the
Tracing Options Form, the <YourFileName>.config updates when users click the Apply button.
The client (UI Trace) log captures the business object methods each client installation activates. These client logs
can also capture the datasets updated by the business object transaction.
You can activate the following custom traces on the client log:
• BitFlagViewWatcher - When you activate this trace, you launch the Bit Flag View Watcher utility. This
utility monitors EpiDataViews to determine whether the Bit Flag column has changed its value.
• DataTraceCCDataSet - If you wish to review the performance of BPM methods and customizations, activate
this client trace. The Call Context Dataset initializes when a user activates a program (UIApp) that either
launches a customized form or a BPM directive. As long as the program is active, method calls are sent to the
Call Context Dataset. To activate this trace on the Tracing Options Form, select the Write Call Context
DataSet check box.
• DataTraceFullDataSets - Activate this trace to record the entire dataset content including the method
parameter structure and data (if any) that passes between the client and the server. Each time a method sends
data, it appears in the client log with the method. When this check box is cleared, only the structure is recorded.
You can also activate this trace on the Tracing Options Form by selecting the Write Full DataSet check
box.
• DataTrace - Activate this client trace when you only want changes to the dataset recorded in the trace log.
All changes to columns in the dataset are stored in the log. You can activate this trace on the Tracing Options
Form by selecting the Track Changes Only check box.
• DataTraceReturnData - When a method call has a <returnType> other than void, activating this trace causes
the dataset returned from the server to display on the tracing log. Numerous method calls occur where the
data is passed down, modified, not written to the database, and then returned to the client. Selecting this
check box places these hidden calls on the trace log. This only applies to datasets that get updated or returned
from the server call. Examples of these methods include Credit Checking, Part Verification and Pricing, and
GetNewXXX (where XXX is the name of a record). To activate this trace on the Tracing Options Form, select
the Write Response Data check box. You may need to clear this check box if your system has performance
issues when tracing is enabled.
Tip You typically select this option when you are developing a Service Connect workflow and need to
see when a non-obvious value is set by a method call. The tracing log displays “Before” and “After”
images of the dataset.
• DataTraceIncludeServerTrace - Activate this trace to include information from server processing in the client
trace log. This option is useful if you want to diagnose how client activity affects the application server. For
example, select this check box to see what server side calls interact with the client. You can activate this trace
on the Tracing Options Form by selecting the Include Server Trace check box.
You can also add server profiles and traces to the client log. Then when you select the Include Server Trace
check box, the client log captures these additional options. Use this feature when you want to track server
activity from a client machine instead, reducing the impact on performance. To learn what tracing options
are available, review the following Available Server Traces section.
• DeregistrationException - If you want the log to capture de-registration exception messages, activate this
client trace. These error messages then display in the log.
• DialogException - Turn on this trace when you wish to capture any dialog box exception messages that
display while the client installation runs.
• LogException - A log exception message generates when the client log fails to run. This can occur when the
Logger object cannot be created or the log fails to run due to input/output (IO) problems.
• Catch - Activate this client trace option when you want to see exception handler messages in the client log.
These exception types are contained within catch blocks in the code.
• FormLoad - Activate this trace when you want to track each form (program) that launches on the client
installation.
• NotifyAll - When you use this trace, the client log traces all the threads waiting on a business object, identifying
which thread will process first, second, and so on. You can then identify which threads are waiting for other
processes to complete before they can run.
• TransactionLoad - Use this trace to measure, in seconds, how long it takes to load each transaction from
the client installation.
You might want to track the custom profiles and trace options from a client machine instead, reducing the
impact on performance. These tracing options will then only capture server activity caused by actions that
occur on the client installation.
To customize the operations or server activity recorded on the client log, you add a setting on the client installation's
.sysconfig file.
2. Make a copy of the default.sysconfig file. Name this copy using a file name that helps you easily identify it.
Important You should not make changes to the default.sysconfig file. By creating a copy of this
default file, you can always use the original default file to revert the client installation back to its
original launch settings.
3. After you copy the .sysconfig file, set up the client installation to launch with this file. Right-click the Epicor
ERP client icon; from the context menu, select Properties.
4. Add the following run time argument (also called a switch) to the Target field:
/config=<FileName>.sysconfig
6. Add the <serverTraceFlagsSettings> node to the .sysconfig file. You can then place the server trace and
profile options within this .sysconfig setting. For example:
<serverTraceFlagsSettings>
<add uri="trace://ice/log" />
<add uri="profile://ice/fw/tableset" />
</serverTraceFlagsSettings>
a. When you run the application using the Classic Menu interface, you activate the trace log from the
Main Menu. Click Options > Tracing Options.
b. When you run the application using the Modern Home Page interface, you can activate the trace log
by clicking the Down Arrow at the bottom of the window. From the toolbar, click the Tracing Options
button. Likewise from the Modern Shell interface, you can activate the tracing log from the Home menu.
Click the Settings tile and the General Options setting. Select Tracing Options.
c. When you run the application using the Active Home Page interface, you can activate the trace log by
clicking the Utilities icon in the top right corner of the window. From the utilities list, select Tracing
Options. Alternately, in the Active Home Page interface, you can activate the tracing log from the Home
menu. Click the Settings icon and the General Options setting. Select Tracing Options.
10. To activate the client trace log, select the Enable Trace Logging check box.
11. Now select the Include Server Trace check box. This indicates the client trace log will use the custom server
profiles and traces you added to the .sysconfig file.
Your tracing options run immediately. As long as the Tracing Options Form remains open, the log captures the
server trace options.
Locking and blocking occur when two or more database connections try to access the same piece of data
simultaneously.
A piece of data is locked when a connection needs exclusive access to it. SQL Server prevents, or locks, this piece
of data so it cannot be updated by other connections. This data is then blocked when another connection attempts
to access this data. This data becomes available when the first connection releases it. Similar to deadlocks, you
need to discover what connection(s) could not access the data (the victims) and what connection blocked the
data (the culprit).
Excessive locking and blocking slows down database performance. You can run the sp_lock3.sql stored
procedure to determine whether excessive locks and blocks are occurring on your database.
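sp_lock3 is a script Epicor supplies; if you want a quick manual check for blocking before setting it up, SQL Server's dynamic management views expose similar information. A minimal sketch (run in a SQL Server Management Studio query window; requires VIEW SERVER STATE permission):

```sql
-- List sessions that are currently blocked, the session blocking them,
-- and the statement each blocked session is waiting to run.
SELECT r.session_id          AS blocked_spid,
       r.blocking_session_id AS blocking_spid,
       r.wait_type,
       r.wait_time           AS wait_ms,
       t.text                AS blocked_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
```

An empty result set means no session is blocked at the moment you run the query; sp_lock3 remains the better tool for tracking blocking over time.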
You download the sp_lock3.sql file through the Performance and Diagnostic Tool.
1. Launch the Performance and Diagnostic Tool. Depending on your operating system, you launch this tool in
different ways:
a. If you are on Windows SQL Server 2008 R2 or Windows 7, click Start > All Programs > Epicor Software
> Epicor Administrative Tools > Performance and Diagnostic Tool.
b. If you are on Windows SQL Server 2012 or Windows 8, press the <Windows> + F button to display the
Charms bar; from the Apps screen, select Performance and Diagnostic Tool.
4. Click OK.
The DeadlockAnalysis.zip file is downloaded to your computer.
You use SQL Server Management Studio to configure this stored procedure file.
1. Click Start > All Programs > Microsoft SQL Server 2008 > SQL Server Management Studio.
Microsoft SQL Server Management Studio displays.
2. Connect Microsoft SQL Server Management Studio to the SQL instance you wish to monitor.
3. Open the sp_lock3.sql script and run it against the database you wish to test. To do this, first click File >
Open > File and navigate to the folder that contains the sp_lock3.sql file; select this file.
4. In the Tree View, expand the Databases node and select the database against which you want to test for
locks and blocks.
5. Now click in the sp_lock3.sql pane; this causes the procedure to be in focus.
The log will run. Depending on the locking situation, it may show some lock data. However if you receive
a "Could not find stored procedure 'sp_lock3'" error message, the SQL procedure is not installed correctly.
Repeat these steps to re-install the script.
The sp_lock3.sql procedure is now configured to create a log against the selected database.
Now within SQL Server Management Studio, you need to schedule a SQL Server Agent job that runs the
sp_lock3.sql script and then appends its output to a log file.
1. Using the Object Explorer, expand the SQL Server Agent node; right-click the Jobs folder and select the
New Job... option.
The New Job window displays. By default, the General node is selected.
2. Enter a Name for your log file. For example, enter <XXX>_LocksLog (Where XXX are your initials or another
identifying value).
6. Enter a Step name for this job step. Enter <XXX>_JobStep01 (Where XXX are your initials or another
identifying value).
7. Click the Database drop-down list and select the database against which you want to test the locks and
blocks. For example, select Demo.
10. Use the Output file field to define the directory path for the log file. You can select the directory path by
clicking the browse (...) button.
Example C:\_PerformLogs\<XXX>LocksLog_Output.txt (Where XXX are your initials or another
identifying value)
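The job built in the steps above can also be scripted. The following sketch uses the documented msdb SQL Server Agent stored procedures; the job, step, database, and output path names mirror the examples above and are illustrative only:

```sql
USE msdb;
GO
-- Create the job and a T-SQL step that runs sp_lock3 against the Demo
-- database, appending its output to the log file.
EXEC sp_add_job @job_name = N'XXX_LocksLog';
EXEC sp_add_jobstep
     @job_name         = N'XXX_LocksLog',
     @step_name        = N'XXX_JobStep01',
     @subsystem        = N'TSQL',
     @database_name    = N'Demo',
     @command          = N'EXEC sp_lock3;',
     @output_file_name = N'C:\_PerformLogs\XXXLocksLog_Output.txt';
-- Schedule the job to run once a minute.
EXEC sp_add_schedule
     @schedule_name        = N'XXX_EveryMinute',
     @freq_type            = 4,   -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,   -- subday units of minutes
     @freq_subday_interval = 1;   -- every 1 minute
EXEC sp_attach_schedule @job_name = N'XXX_LocksLog',
                        @schedule_name = N'XXX_EveryMinute';
-- Register the job on the local server so the Agent will run it.
EXEC sp_add_jobserver @job_name = N'XXX_LocksLog';
```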
1. Verify the SQL Server Agent is running. To do this, go to the desktop and click Start > All Programs >
Microsoft SQL Server 2008 > Configuration Tools > SQL Server Configuration Manager.
4. From the Object Explorer, expand the SQL Server Agent > Jobs node.
5. Right-click the <XXX>_LocksLog (Where XXX are your initials or another identifying value); from the context
menu, select Start Job at Step.
The Start Jobs window displays with both the Success icon and statuses listed in the grid.
The sp_lock3 output is now appended to the log file. Locking and blocking data records to this log every
minute. Track this data for a day and then return to this log to evaluate the results.
This section describes how you review the locks log and resolve the locking issues.
The sp_lock3 procedure only generates output when it detects locking and blocking issues. If the locks log is
empty, locking and blocking is not a performance issue on the database.
However if this log contains some locking and blocking entries, use the log to review what database connection
is trying to access the data at the same time.
In the above example, COMPANY/JoeS from the USERCOMP84 machine is accessing the Epicor ERP database.
The access currently given to USERCOMP84 is not recommended (this connection is the culprit), as it can cause
locking and blocking. In this case, Epicor would recommend that only the COMPANY/EPICADMIN login from the
APPSERVER machine have access to this data.
If the log records locking and blocking issues while users run custom or modified reports using an ODBC
connection, consider creating a table view for the report. Since database tables can become locked while the
report runs, the table view can prevent locking and blocking situations. To create a table view:
1. Open the report in a third party application like Microsoft® SQL Server Report Builder, Excel, or another
ODBC source.
When users access the table view, they cannot create a shared lock on the data. This improves the performance
of the report.
Tip If you decide not to create a table view for the report, you should instead enter the with(nolock)
option in the SELECT statement for the table. You can also use this option to optimize performance
on Business Activity Queries (BAQs), Business Process Management (BPM) methods, and customizations.
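As an illustration of the tip above, a SELECT using the nolock hint might look like the following. The table and column names are examples only, and be aware that with(nolock) permits dirty reads, so reserve it for reporting-style queries:

```sql
-- Read part data without taking shared locks; other connections can
-- update the table while this query runs (dirty reads are possible).
SELECT PartNum, PartDescription
FROM Erp.Part WITH (NOLOCK)
WHERE Company = 'EPIC06';
```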
The Blocked Process Report Event captures the details you need to identify blocked processes.
Turn on this event when you suspect your SQL Server is experiencing locking and blocking issues. Run this event
during a period when you think blocking is occurring. When you have gathered the trace log data you need,
shut off the event. You will then have a log that contains blocking information to review.
You activate the Blocked Process Report Event by using the SP_CONFIGURE ‘blocked process threshold’ command
in SQL Server Management Studio. This command is typically set to 0, which indicates the event is turned off.
You set this value to a number higher than 0 to activate this event.
The ‘blocked process threshold’ command is an advanced SP_CONFIGURE command. Because of this, you must
first activate the ‘show advanced options’ command. After you activate this command, the ‘blocked process
threshold’ command is available to run. You activate both commands by entering the RECONFIGURE command.
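The 'show advanced options' step described above is entered the same way as the threshold command; for example:

```sql
-- Expose the advanced sp_configure options so that
-- 'blocked process threshold' becomes available to set.
SP_CONFIGURE 'show advanced options', 1;
GO
RECONFIGURE;
GO
```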
To activate the Blocked Process Report Event:
2. Enter your credentials for the Epicor ERP SQL Server instance.
3. Click File > New > Query with Current Connection. The middle pane displays a blank query window.
5. The advanced options now activate. You next activate the Blocked Process Report Event. Enter code
similar to the following:
SP_CONFIGURE 'blocked process threshold', 10;
GO
RECONFIGURE;
GO
Notice the value 10 in the above example. This value indicates how often you want this event to
check for blocked processes, calculated in seconds. So in the above example, the event runs every 10 seconds.
If any blocked processes happen during each 10 second time span, the event captures them and the locking
event details are saved to the trace log. Enter the duration you think will best help you capture the locking
and blocking information.
Tip If you need to run the trace for a day or more, consider setting this duration for a longer time
period like 1800. This causes the event to activate every 30 minutes. The threshold can be set from
0 to 86,400 seconds (24 hours).
7. When you finish checking for deadlocks, return to SQL Server Management Studio.
Now that the event is active, you can run the trace log.
4. Enter a Trace name for the trace file. For this example, enter XXXLocks (where XXX are your initials).
8. Clear (deactivate) the check boxes for the rest of the events.
9. Click Run.
The trace log window displays. Let the trace run for as long as you need.
10. When you suspect enough time has elapsed, return to the trace log window. Click File > Stop Trace.
The data within the <inputbuf> tags contains the locking information. Review this information to determine
the source of the locking. However if the query is too large to fit in the <inputbuf> tag, the data is truncated.
To get the full query code, launch SQL Server Management Studio and enter this query statement:
select * from ::fn_get_sql(<SqlHandle>)
Copy and paste the query statement, replacing the <SqlHandle> tags with any of the <SqlHandle> values from
the blocked process report. This returns the full query to analyze.
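Note that ::fn_get_sql is deprecated on recent SQL Server versions; the documented replacement, sys.dm_exec_sql_text, accepts the same handle. A sketch (the <SqlHandle> placeholder is again a value copied from the blocked process report):

```sql
-- Equivalent lookup on newer SQL Server versions; replace <SqlHandle>
-- with a sql_handle value from the blocked process report.
SELECT text
FROM sys.dm_exec_sql_text(<SqlHandle>);
```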
When you review these blocked process entries, locate the lockMode value. This field displays the lock type that
was received or requested for the deadlock event. Some example lock types include 0=NULL, 8=IX, 16=RangeI-S.
These lock types will help you determine why the lock occurred. You can review the various lock types in SQL
Server Books Online.
Besides the event, the accompanying .xml data might be helpful, as the values for SPID, username, current state,
and so on may help you locate the cause of the locking and blocking.
Be aware that each time duration interval may contain different blocking events. Look for events that are locked
the longest by evaluating the Duration value. If these events happen frequently, you can then identify the code
that activated the event. You should be able to determine the cause of the locking and blocking.
Resolve Locking
Blocking and locking can happen for a number of reasons. Perhaps too much data is being returned, an
index isn't available, or SQL Server didn't lock the best process. The following list identifies some solutions for
locking and blocking. As you analyze the events in the trace log, determine the best solution for resolving
the blocking. Some ideas:
• Rewrite the code causing the lock.
• Fine tune your transactions so they run as quickly as possible.
• Return the minimum data amount that you need to complete a process.
• Add any indexes that may be missing.
• Verify both column and index information is up to date.
• Set the isolation level to the lowest setting that will work with your system.
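For the last idea in the list, one commonly used option on SQL Server is row versioning, which lets readers see the last committed version of a row instead of waiting on writers' locks. A hedged sketch (the database name is a placeholder; test this setting carefully before applying it to a production Epicor database):

```sql
-- Switch the database to read-committed snapshot isolation so readers
-- no longer take shared locks that block writers.
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- rolls back open transactions to apply the change
```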
If you would like more information on tracing locking and blocking, Microsoft has resources available on
technet.com and in SQL Server Books Online.
Deadlocks
A deadlock occurs when two users or sessions have locks on separate business objects, and each business object
process tries to establish a lock on the business object in use by the other user/session.
SQL Server automatically detects and resolves deadlocks. If a deadlock occurs, one process ends (the victim),
while the other process runs (the culprit). The victim transaction is rolled back. While determining the victim is
relatively easy, what you need to discover is the culprit process that caused the deadlock. This culprit could be
another Epicor ERP application process running at nearly the same time or a third party application preventing
the victim process from initializing.
You can monitor when deadlocks occur by activating trace flag -T1222. If you discover deadlocks are happening,
you should then use the SQL Server Deadlock Graph. While this graph runs, deadlock events are recorded in
the SQL Profiler trace log.
Detecting Deadlocks
You can discover deadlocks by activating a trace flag in your startup procedures. When the -T1222 trace flag
runs against a SQL Server instance, the system will track deadlock errors and display them in the Event Viewer.
Tip Both Trace Flags and the Event Viewer are described previously in this guide. If you are not familiar
with these features, review the previous sections.
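Besides adding -T1222 to the startup parameters, you can turn the flag on for a running instance with DBCC TRACEON; for example:

```sql
-- Enable deadlock reporting globally (-1) on the running instance;
-- the setting does not survive a restart unless -T1222 is also added
-- to the SQL Server startup parameters.
DBCC TRACEON (1222, -1);
```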
Trace Flag T1222 returns deadlock information using an XML format. The deadlock messages return these
attributes:
• deadlock victim - Displays the physical memory address of the task the system selected as the victim. If the
deadlock is not resolved, this value displays a 0 (zero) value.
• executionstack - Displays the Transact-SQL code that ran when the deadlock occurred.
• priority - Indicates the importance of the deadlock in comparison to other deadlocks that may be occurring.
• logused - Displays the log space used by the task.
• owner id - Indicates the identifier for the transaction that controlled the deadlock request.
• status - Indicates the state of the task; this attribute can have one of these values:
• pending - The task is waiting for a worker thread.
• runnable - The task is ready to run but is waiting for a quantum.
• running - The task is running on the scheduler.
• suspended - The task is currently not running.
• done - The task has finished processing.
• spinloop - The task is waiting for a free spinlock.
• waitresource - Indicates the resource this task needs to complete its run.
• scheduleid - Displays the scheduler associated with the task.
• hostname - Displays the workstation which hosted the task.
• isolationlevel - Indicates the current level of isolation for the task.
• Xactid - Displays the ID for the transaction that controls the request.
• currentdb - Contains the ID for the database.
When you discover a deadlock, you should next run the SQL Deadlock Graph. This graph will give you detailed
information about the causes of the deadlock.
Important Before you activate the SQL Deadlock Graph, you must set the Epicor server logs to the Verbose
setting. When you finish deadlock tracing, be sure to deactivate the server log.
You download the deadlock graph through the Performance and Diagnostic Tool.
1. Launch the Performance and Diagnostic Tool. Depending on your operating system, you launch this tool in
different ways:
a. If you are on Windows SQL Server 2008 R2 or Windows 7, click Start > All Programs > Epicor Software
> Epicor Administrative Tools > Performance and Diagnostic Tool.
b. If you are on Windows SQL Server 2012 or Windows 8, press the <Windows> + F button to display the
Charms bar; from the Apps screen, select Performance and Diagnostic Tool.
You access the SQL Server Profiler to activate the SQL Deadlock Graph and run a deadlock trace against a server.
To do this, launch SQL Server Management Studio.
1. Click Start > All Programs > Microsoft SQL Server 2012 > SQL Server Management Studio.
6. In the Trace Name field, enter XXXTrace (where XXX are your initials).
The words "Trace Start" display in the SQL Server Profiler and the trace begins. You typically run this trace log
for a full day to capture the potential deadlocks that occur on your system.
1. When you are ready to stop the deadlock trace log, click File > Save As > Trace File.
The Save As window displays.
2. Enter the File Name that helps you locate the file.
3. Either save the file to the default folder, or click the Browse Folders button to find and select a different
folder.
4. Click Save.
2. Using Windows Explorer, navigate to the directory that contains your tracing file. This file has the .trc
extension.
3. Click Open.
The trace log displays in the SQL Server Profiler view window.
5. The left bubble (it has an "x" through it) identifies the process that was the deadlock victim. Use these
values to determine which process was locked.
6. The Key Lock squares identify which culprit business objects were trying to establish a lock on the victim
business object. Both these culprit business objects requested an update to the same key on the victim
business object.
7. The right bubble identifies the business object that is the owner of the culprit business objects.
8. You can right-click in the deadlock graph to display additional information about the deadlocked transaction.
9. When you finish your analysis, close the SQL Server Profiler.
Tip To help you further analyze this data, you can also zip up this trace file and send it to Epicor Technical
Support.
Memory Leaks
The Epicor ERP application server is hosted within the w3wp.exe process, which runs under Internet Information
Services (IIS).
You can launch the Windows Task Manager to see how much active memory the application servers are
consuming. During a typical work day, the Epicor ERP application processes a lot of data, so you may see memory
levels reach as high as these thresholds:
• Interactive Application Server – 20 Gigabytes
• Reporting Application Server – 30-35 Gigabytes
As long as w3wp.exe uses this much memory or less (Private Working Set memory), the Epicor ERP application
is consuming normal amounts of active memory. However if you are repeatedly seeing w3wp.exe take more
memory than these threshold levels over a period of several days, you may be experiencing a memory leak.
The best way to determine whether you have a memory leak is to review the w3wp.exe task after Internet
Information Services (IIS) is reset, but before you run the suspect process task. Check the Memory (Private
Working Set) value. Now run the suspect process task. While this process runs you may see a spike in memory
consumption; however as long as this value stays below the threshold, this consumption spike is normal. Wait a
few minutes after the task finishes, then return to the Windows Task Manager and check the Memory (Private
Working Set) value. If this value is high, you may have a leak.
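The before/after comparison described above amounts to a simple heuristic: a short spike that falls back down is normal, while memory that climbs past the threshold and stays there suggests a leak. A minimal sketch of that heuristic in Python (the threshold comes from this section; the sample values and the function itself are illustrative, not part of the Epicor tooling — in practice you would feed in Memory (Private Working Set) readings from Task Manager or a monitoring tool):

```python
# Hypothetical leak check: flag a leak only when every sample in a
# trailing window stays above the threshold (sustained high usage),
# as opposed to a brief spike that recovers.

INTERACTIVE_THRESHOLD_GB = 20  # normal ceiling for the interactive appserver

def sustained_above(samples_gb, threshold_gb, window=3):
    """Return True if the last `window` samples all exceed the threshold."""
    if len(samples_gb) < window:
        return False
    return all(s > threshold_gb for s in samples_gb[-window:])

# A spike that recovers is normal...
spike = [12, 14, 26, 13, 12]
# ...but memory that climbs and stays high suggests a leak.
leak = [12, 18, 24, 27, 31]

print(sustained_above(spike, INTERACTIVE_THRESHOLD_GB))  # False
print(sustained_above(leak, INTERACTIVE_THRESHOLD_GB))   # True
```

The window size is a judgment call; the point is that a single high reading after the task finishes is not enough evidence, while several in a row are worth escalating.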
Increased memory consumption can occur on the application server when you have business activity queries
(BAQs) and trackers that return more than 500,000 records. Likewise if you run several reports and processes at
the same time or generate large reports, you can use up too much memory. These situations slow performance.
Epicor Technical Support can help determine the causes of memory leaks. To give them a snapshot of your
system’s memory consumption, create a memory dump file and then send it to Epicor Technical Support for
review.
You use the Windows Task Manager to create the memory dump file.
Be aware this is not a production-friendly process. While you generate the memory dump file, the Epicor ERP
application will not run until this file is complete. You should only create a memory dump file as a last resort.
However if you are experiencing memory leaks, Epicor Technical Support will ask for this file to help locate the
issue.
This dump file can also be as large as the memory footprint it records, so it will take up a lot of space on your
C: drive. For example, a memory dump file that records 25 gigabytes of memory usage will use 25 gigabytes of
space. Be sure to move this file off your C: drive to avoid running out of hard drive space. To create the memory
dump file:
3. Right-click this file; from the context menu, select Create Dump File.
4. The Windows Task Manager stops the Epicor ERP application and generates the dump file on your C: drive.
After the file is created, the Epicor ERP application will resume.
7. Send this .zip file to Epicor Technical Support. Be sure to detail how often you are experiencing the excessive
memory usage on your system.
Because the Epicor ERP application is shut down while the memory dump file generates, Internet Information
Services (IIS) may stop the memory dump. This occurs because the application pool does not respond to pings
during the dump, and so IIS recycles the application pool.
When IIS stops the memory dump process, you receive one of the following errors:
• The operation could not be completed. Only part of a ReadProcessMemory or WriteProcessMemory request
was completed.
• Could not create dump file *.dmp for process id*. GetLastError returns 0x8007012B.
Typically you receive this error when the memory dump file is very large. The process takes a long time to complete,
so eventually IIS checks, or pings, the application pool and notices the inactivity. To verify this situation is happening,
launch the Event Viewer. Navigate to the Windows Logs > System node to display the System Event Log. Look
for an event indicating the application pool failed to respond to a ping and was recycled.
To prevent this error, you can either temporarily shut off the Ping function or increase how long it takes IIS to
ping the application pool.
a. To shut off the ping function, locate the Ping Enabled setting; from the drop-down list, select False.
b. To increase how long it takes IIS to ping the application pool, update these settings:
• Maximum Response – Increase to a value greater than 90.
• Ping Period – Increase to a value greater than 30.
7. After the memory dump file is generated, return to the IIS Manager and display the Advanced Settings
window.
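The interaction between the dump duration and these two settings can be reasoned about roughly: IIS pings the application pool once per Ping Period and allows up to the Maximum Response time for a reply, so in the worst case an unresponsive process has roughly the sum of the two values before it is recycled. A sketch of that arithmetic (an assumed simplification of IIS's actual health-check behavior; the defaults of 30 and 90 seconds come from the settings above):

```python
# Rough model, not IIS itself: a worker process that stays unresponsive
# longer than about one ping period plus the maximum response time
# risks being recycled mid-dump.

def survives_dump(dump_seconds, ping_period=30, max_response=90):
    """Return True if a dump of this length should finish inside the
    worst-case window IIS tolerates before recycling the pool."""
    return dump_seconds <= ping_period + max_response

print(survives_dump(60))    # True: finishes inside the default window
print(survives_dump(600))   # False: a large dump will likely trip the recycle
```

This is why large dumps (tens of gigabytes) almost always require disabling the ping or raising both values first.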
Crash Diagnostics
You can configure error reporting so that if the Epicor ERP application crashes, a memory dump file automatically
generates for your review.
To activate this Microsoft feature, you need administrator rights on the server. You set up this option by adding
registry keys for the LocalDumps directory.
3. Click OK.
The Registry Editor displays.
5. Right-click the LocalDumps directory; from the context menu, select New > Expandable String.
A new string value icon appears for the registry.
8. This value defines the directory where the crash dump file will generate. Enter the
%LOCALAPPDATA%\CrashDumps directory.
Tip You will need to manually create this folder later through Windows Explorer.
9. Click OK.
10. Now repeat these steps to add more keys to the LocalDumps registry.
The next topic, Crash Registry Keys, describes the keys you add to the registry.
DumpType (32-bit DWORD Value)
Indicates the kind of dump file you will generate. Available values:
• 0 – Custom Dump
• 1 – Mini Dump
• 2 – Full Dump
Recommended value: 2 – You should always run a full dump. This ensures you have the data available to review
the causes of the crash.
If the Epicor ERP application crashes, the system checks the registry settings to verify whether a memory dump
file needs to be created.
If the registry settings indicate that a dump file should be generated automatically, the memory dump file is saved
in the DumpFolder location you defined. The system can build this file without rebooting Windows, so the rest
of the system is not disrupted while this file generates. After the memory dump file is created, the Epicor ERP
application shuts down.
You can use this memory dump file to find out what caused the crash. You could also send this file to Epicor
Technical Support or your Epicor consultant for further analysis.
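The LocalDumps values described above can also be applied by importing a .reg file rather than editing each key by hand. A minimal sketch that generates one (the key path and value names follow Microsoft's documented Windows Error Reporting LocalDumps settings; the generator function itself is illustrative — review the output before importing it on a server):

```python
# Sketch: emit a .reg file that creates the LocalDumps values described
# in this section. Note: importing this file stores DumpFolder as a
# plain string (REG_SZ); the documented type is REG_EXPAND_SZ, so the
# manual Registry Editor steps remain the authoritative procedure.

KEY = r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps"

def local_dumps_reg(dump_folder=r"%LOCALAPPDATA%\CrashDumps", dump_type=2):
    # .reg syntax requires backslashes in string values to be doubled.
    folder_line = '"DumpFolder"="{}"'.format(dump_folder.replace("\\", "\\\\"))
    lines = [
        "Windows Registry Editor Version 5.00",
        "",
        "[{}]".format(KEY),
        folder_line,
        '"DumpType"=dword:{:08x}'.format(dump_type),  # 2 = full dump
        "",
    ]
    return "\r\n".join(lines)

print(local_dumps_reg())
```

Remember that, as noted earlier, the CrashDumps folder itself must still be created manually through Windows Explorer.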
Disk Space
Be sure your disk has enough free space so the operating system can add multiple files to the CrashDumps folder.
Once the system has generated some memory dump files, be sure to check how much disk space your CrashDumps
folder is using. The crash memory dump files can take up a lot of disk space, so be sure to remove old files you
no longer need.
Load Distribution
You can improve how the Epicor ERP application performs by assigning different tasks to specific application
servers that can handle the load. This distributes, or scales out, the load more evenly across your system resources.
Note that typically you achieve better performance results by first improving your hardware. If you have the
resources to add more RAM, upgrade CPU processing, or increase disk capacity in your network or virtual
environment, you should do so. Also note that if you have fewer than 200 users, the Epicor ERP application
can run efficiently on one server machine. However if you have more than 200 users, use VMWare (virtual
environment), or your organization requires special processing that demands significant system resources, consider
setting up multiple application servers to more evenly distribute the load. Not only does this improve performance,
but you also have spare capacity available to handle the load if a system component fails to run properly.
For example, you could assign a process that requires significant resources to run on a more powerful server, like
Material Requirements Planning, and then assign reports that generate less data to a server with fewer resources.
By distributing the load between these application servers, you reduce performance bottlenecks and match a
report/process with a machine best suited to run it. You could also assign users from different areas of the
company, such as the financials area and the shop floor area, to separate application servers and improve
performance. Moving heavy data processes like EDI or Epicor Service Connect to a separate application server
can improve efficiency as well.
You can scale out your system in many ways. Consider the data flow that occurs within your Epicor ERP application,
and develop a load distribution strategy that improves how this data flows between your users and the database.
Like all performance tuning options, distributing the load will not solve every issue. You should first evaluate your
performance situation before you commit time and resources to scaling out your system.
To begin, make sure the slow performance is not caused by an issue that needs to be resolved in a different way.
If the bottleneck is caused by a router or the network hardware itself, distributing the load will not improve
performance -- you instead need to upgrade your hardware. Likewise if the performance problem is caused by
a query or customization that requires a lot of processing resources, update or remove the query or customization.
Likewise if you are experiencing locking and blocking on your system, determine the cause of the locking and
blocking. Once you are sure the poor performance is not being caused by other issues, you can correctly evaluate
your system and determine how to best distribute the load.
Be aware that scaling out the load to multiple application servers can increase the cost of maintaining your Epicor
ERP application. Both the servers and the operating system will cost more to run, and IT personnel will spend
more time maintaining the larger system. The increased complexity will require that IT personnel look for issues
in multiple locations, so some situations may be more difficult to resolve. Likewise any setting updates and one-off
changes must be applied to all the machines linked within the distribution chain. So be prepared to handle these
increased costs in time and resources.
However scaling out the load can have significant benefits that offset these additional costs. The improved
efficiency will increase productivity throughout the quote to books data flow. Your internal users will become
more productive, making them better able to satisfy your customers.
To begin, the Hardware Sizing Guide can help you identify whether you should assign load to multiple application
servers. After you identify the hardware and usage scenario that best matches your organization, you can then
decide whether distributing the load will improve performance within your organization.
The next section illustrates some load distribution scenarios. The first scenario describes a base system, and then
the rest of the scenarios explore, through increasing complexity, how you can distribute the load to address
different environment needs.
Compare these scenarios to your Epicor application makeup and user base. If your situation places similar demands
on your network, use these scenarios as a starting point for designing how you will scale out your environment.
Base System
This graphic illustrates a base system that does not distribute load.
In this scenario, the data activity comes from two sources -- Epicor users performing data entry and EDI entries
flowing from TIE Connect. All the processing for EDI, Epicor Service Connect (ESC), reporting, and data entry
runs through a single application server. Likewise this application server uses a single task agent to interact with
a single server machine. This server machine updates the Epicor database.
Additional Server
This scenario illustrates a load distributed system that now has two servers -- a SQL Active server and a SQL
Passive server. This creates a simple technique for distributing the data flow.
Once again, the Epicor application receives data input from users, EDI, and Epicor Service Connect. This system
runs using a single application server and task agent. But by utilizing two server machines, a SQL Active and a
SQL Passive server, you have a system that can better handle load. The SQL Passive server creates redundancy in
the system. If the SQL Active server stops running for a short or long period of time, the SQL Passive server can
then pick up the tasks and handle the load.
Specific Tasks
This scenario shows you a common way to distribute load. You assign specific tasks to different application
servers.
In this scenario, all the incoming load from Epicor users is run through a dedicated application server, the
Interactive Appserver. This application server only handles the tasks sent to it by Epicor users. The EDI and
Epicor Service Connect load is handled through a separate Heavy Processing Appserver application server with
more capacity. Lastly, all reporting and automatic data processing tasks are run by a third Reporting Appserver.
By dividing these tasks between different application servers, you reduce the chance these activities will cause
bottlenecks in the data flow.
Once again, the SQL Active and SQL Passive servers are also available to further handle the load and provide
redundancy in the system.
Tip Typically you should always distribute your reporting needs to a Reporting application server. This
ensures these tasks run independently from the data entry tasks.
Load Balanced
This scenario demonstrates how incorporating a load balancer such as F5 or Kemp in your system can further
improve performance. If you have a large number of users, load balancers can greatly improve the efficiency of
your system.
The data activity from the Epicor users is now first handled through a series of Citrix/RDP servers. The load
balancer ensures that none of these servers receives too much activity, so the data flow is optimized.
Notice the Interactive application servers are now placed together within an application pool. This pool handles
the data flow through yet another load balancer. The load balancer regulates the activity between the application
servers within the pool, sending a specific set of tasks to an application server that has available capacity.
Likewise two task agents now handle the tasks sent to the SQL Active server. By utilizing multiple task agents,
you add redundancy to the system. When one task agent is busy, another task agent can pick up the tasks and
handle the processing demand.
Tip Each application server can have up to three task agents. These three task agents can then interact
with the same server instance.
Similarly, the incoming EDI data and report/processing tasks are also handled by an application pool that contains
the Reporting AppServers. The load balancer regulates the EDI, reporting, and automatic processing tasks
between the application servers, once again distributing the load to an application server with available capacity.
These reporting, processing, and EDI tasks are then sent to the SQL Active server.
If for some reason the SQL Active server cannot handle the load, the SQL Passive server can then pick up the
processing load.
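The balancing behavior this scenario describes — each incoming task goes to the application server in the pool with the most available capacity — can be sketched as follows. This is an illustration of the general least-loaded strategy, not the algorithm any particular balancer (F5, Kemp, ARR) actually uses, and the server names are made up:

```python
# Illustrative least-loaded dispatch across an application pool.
# Real load balancers weigh many signals; this sketch only tracks
# the number of active tasks per server.

pool = {"Interactive1": 0, "Interactive2": 0, "Interactive3": 0}  # active task counts

def dispatch(task, pool):
    """Assign the task to the server currently handling the fewest tasks."""
    server = min(pool, key=pool.get)
    pool[server] += 1
    return server

assignments = [dispatch("task{}".format(i), pool) for i in range(6)]
print(assignments)  # tasks spread evenly across the pool
print(pool)         # each server ends up with two active tasks
```

The point of the scenario is exactly this evening-out: no single Interactive appserver becomes the bottleneck while others sit idle.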
Security Groups
This final scenario illustrates how a dedicated application server can distribute the data activity coming from a
specific group of users.
Notice data entry from most Epicor users is still handled by the load balanced Citrix/RDP servers, the load balanced
Interactive application pool, and the multiple task agents. However now the users assigned to the Account
Department security group have their own designated application server. Any data activity entered by these
users is handled by the Account Department application server and then sent to the SQL Active server.
Likewise, the incoming load from EDI and other integrated services is also run through a separate Heavy
Processing application server. This designated application server has a larger capacity to handle the load, and it
sends this data into the load balanced application pool that contains the Reporting AppServers.
The SQL Active server still receives all of this activity and updates the database. If the SQL Active server is unable
to handle all the incoming load, the SQL Passive server picks up this load and sends it to the database.
Components
This next Load Distribution section describes the components you can use to scale out the load across your system.
You can set up as few or as many of these components as you need to better facilitate the data flow throughout
your system.
Application Servers
An application server manages how a specific instance of the Epicor application runs. Through each application
server, you can configure licenses, companies, sessions, and users for a specific database.
To set up distributing the load, you first create application servers for each server machine available on your
system. You can set up multiple application servers to update the same database. For example, you create two
application servers for the same database, but these application servers are linked to different server machines
through their endpoint bindings. One application server is set up to run Epicor Web Access (EWA) on one server
machine, while another application server is set up to run a smart client through Net.TCP on a different server
machine. Likewise you could set up another application server that links to a machine which only handles SSRS
reporting tasks.
You add the application servers that will interact with your server machine through the Epicor Administration
Console. This management tool is located on your server installation. For information on how to add application
servers, review application help in the Epicor Administration Console.
Task Agents
The task agent handles all scheduled tasks for an application server.
The task agent activates any program added through a recurring schedule. Users add programs to recurring
schedules through the Schedule drop-down lists available on programs throughout the Epicor application. You
create these schedules in the Epicor application using System Agent Maintenance, and you also use this program
to create task agent rules to distribute the load.
Tip To learn how to assign tasks to automatic schedules, review the System Agent Maintenance topics in
the application help or review the Automatic Data Processing chapter in the Epicor Implementation User
Guide. This user guide is available on EPICWeb: https://epicweb.epicor.com/documentation/user-guides
You can set up as many as three task agents to distribute tasks to the same Epicor database. This allows for
redundancy: if one task agent fails, another one can pick up the tasks and send them to the server. Each task
agent picks up any tasks that are currently available to process. If a task is processed by one agent, it is not
processed by another task agent.
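The claim-once behavior described above — multiple agents draining one shared task table, with each task processed by exactly one agent — can be modeled with a shared queue. This is a simplified sketch, not the actual task agent implementation, and the task names are illustrative:

```python
import queue

# Simplified model: task agents claim tasks from one shared queue.
# Claiming is atomic, so no task is processed twice; if an agent
# stops, the remaining agents simply keep claiming outstanding tasks.

tasks = queue.Queue()
for t in ["MRP", "Job Traveler", "AR Aging", "PO Print"]:
    tasks.put(t)

def run_agent(name, tasks, processed):
    """Claim and process tasks until none remain."""
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return
        processed.append((name, task))

processed = []
run_agent("TaskAgent1", tasks, processed)  # TaskAgent2/3 would run concurrently
print(len(processed))  # 4 — every task handled exactly once
```

In this model, running a second or third agent changes only who claims each task, never how many times a task runs — which is the redundancy guarantee the paragraph above describes.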
You create task agents using the Task Agent Service Configuration program. You launch this program from
within the Epicor Administration Console. When you select the application server, the center pane on the Epicor
Administration Console displays the settings for the application server. Click the Task Agent Configuration
button to set up the task agent for the selected application server.
Note If the Task Agent has stopped working and the Event Log Viewer has no information, or insufficient
information, on an event, check the Application log in the Microsoft® Event Viewer for an event where the source
is Task Agent Service.
Load Balancers
A load balancer is an optional hardware device that distributes data flow across multiple application servers. If
your organization needs to process large amounts of data through EDI, MRP, reports, and similar tasks that
require a lot of resources, consider incorporating load balancers into your system.
A load balancer can improve the performance of the Epicor application by distributing tasks between application
servers contained within an application pool. The load balancer takes the incoming tasks and sends them to an
application server with the available capacity to handle these tasks.
You can purchase load balancers from Kemp®, F5®, and other manufacturers. Review the information from
these companies to learn how to best incorporate load balancers within your system.
However you can also use Application Request Routing (ARR) to load balance your system. This Internet
Information Services (IIS) extension enables a server farm to also act as a load balancer between application servers.
When ARR is installed, the server farm can now route incoming message calls to multiple application servers,
improving network performance. If you would like to use ARR to load balance your system, review the Epicor
Installation Guide.
Security Groups
Use security groups to organize your Epicor user base into separate functional areas. Security groups are an
optional feature you can use to both control access to specific areas of the Epicor application and regulate data flow.
You create security groups through Security Group Maintenance. This program sets up the identifier and the
description for the security group. You then assign users to these security groups through User Account Security
Maintenance. Your users are now contained in specific security groups, and you can grant or prohibit access
to different areas of the Epicor application through the security programs.
Tip For more information on setting up security, review the Security documentation in either the System
Administration Guide or the Implementation User Guide. The System Administration Guide is located
within application help. The Implementation User Guide is available both within the application help or on
EPICWeb: https://epicweb.epicor.com/documentation/user-guides
You also use security groups to distribute the load. To do this, you set up task agent rules that link a security
group to a designated application server. Now when users within this security group enter data activity, this
designated application server handles the load.
Menu Path: System Setup > Security Maintenance > Security Group Maintenance
Menu Path: System Setup > Security Maintenance > User Account Security Maintenance
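A task agent rule, as described above, maps a combination of company, security group, and task to a designated application server. The selection logic can be sketched as follows (a hypothetical model based on the Rule Type options documented later in this section — Specific Task, All Tasks, Specific Report, All Reports; the company ID, group names, and URLs are all made up):

```python
# Illustrative routing: pick the appserver URL for an incoming activity.
# First matching rule wins; unmatched activity falls back to the
# default interactive appserver.

rules = [
    # (company, security_group, rule_type, process_id, appserver_url)
    ("EPIC01", "Accounting", "All Tasks", None, "https://acct-appserver/ERP"),
    ("EPIC01", None, "Specific Task", "MRP", "https://heavy-appserver/ERP"),
    ("EPIC01", None, "All Reports", None, "https://report-appserver/ERP"),
]

DEFAULT = "https://interactive-appserver/ERP"

def route(company, group, kind, process_id):
    """Return the appserver that should handle this task or report."""
    for r_company, r_group, r_type, r_process, url in rules:
        if r_company != company:
            continue
        if r_group is not None and r_group != group:
            continue  # rule is scoped to a different security group
        if r_type == "All Tasks" and kind == "task":
            return url
        if r_type == "All Reports" and kind == "report":
            return url
        if r_type == "Specific Task" and kind == "task" and r_process == process_id:
            return url
        if r_type == "Specific Report" and kind == "report" and r_process == process_id:
            return url
    return DEFAULT

print(route("EPIC01", "Accounting", "task", "SO Entry"))       # acct-appserver
print(route("EPIC01", "ShopFloor", "task", "MRP"))             # heavy-appserver
print(route("EPIC01", "ShopFloor", "report", "Job Traveler"))  # report-appserver
```

Notice how one rule table expresses all the scenarios described earlier: a security group pinned to its own appserver, a heavy process like MRP isolated on a bigger machine, and all reports diverted away from the interactive appserver.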
System Agent
The System Agent regulates the processing between the Epicor application and the application servers. When
you first install the Epicor application or update your existing application, a system agent is automatically generated.
You manage the system agent through System Agent Maintenance. Use this program to modify settings on
the system agent, create rules to divide processing between different application servers, and define the default
schedules available to processes in ERP that can be run by automated schedules. Please note that there can only
be one system agent. You can delete it and create a new one, or just modify the existing one.
The system agent can contain multiple schedules. You can then assign specific tasks, such as reports and processes,
to run when the schedule activates.
Tip For more information on assigning tasks to schedules, review the Automatic Data Processing chapter
within the Implementation User Guide. The Implementation User Guide is available within both the
application help or on EPICWeb: https://epicweb.epicor.com/edu/user-guides
System agents are key for scaling out your system. Each system agent can have multiple task agent rules that
divide the system agent's processing between different application servers. You can set up task agents that
distribute the processing for specific tasks such as MRP or the Job Traveler report. You can also create task agent
rules that distribute processing for selected companies or security groups. By creating a series of task agent rules,
you can direct the data flow to the application server you designated to handle the specific load.
After you have set up the new application server and security groups for your users, you are ready to create task
agent rules that distribute the load between the available application servers.
2. Click the Agent ID… button to find and select the system agent. Only one system agent is available.
3. Click on the Actions menu and select the Edit Task Agent Rules... option.
The Task Agent Rules window appears.
5. Select the Company for which this task agent rule will generate tasks.
Only companies assigned to the current user account display on this drop-down list. The task agent rule will
then handle processing for the selected company.
6. Optionally select the Security Group for the task agent rule.
Whenever a user assigned to the selected security group runs a report or process linked to this task agent
rule, the application server linked to this rule generates the system activity.
7. Next define the Rule Type option. This value indicates what tasks are handled by the task agent rule.
Available options:
• Specific Task - Indicates this task agent rule will only run against a specific process. After you select this
rule type, you next select the process from the Process Id drop-down list.
• All Tasks - Indicates all processes are run against this task agent rule. Any time a process is launched by
users within a company or a security group, this task agent rule handles the processing.
• Specific Report - Indicates this task agent rule will only run against a specific report. After you select
this rule type, you next select the report from the Process Id drop-down list.
• All Reports - Indicates all reports are run against this task agent rule. Any time a report is launched by
users within a company or a security group, this task agent rule handles the report generation.
8. If you select either the Specific Task or the Specific Report rule type, you next select the Process Id for the
item you want the task agent rule to run. Depending on the rule type, either reports or processes display
on this drop-down list.
9. Enter the Appserver URL for the application server that will run the activity for this task agent rule. This
value links the task agent rule to the application server's location.
Tip You can find this value by opening the system configuration (.sysconfig) file for the client
installation. Locate the <appSettings><AppServerURL> node and copy this value.
10. Use the Endpoint Binding drop-down list to indicate how this application server checks for authentication
certificates through Internet Information Services (IIS). Select the Endpoint Binding defined on the application
server; the same options display on this drop-down list. Available options:
• UsernameWindowsChannel - This NET.TCP binding authenticates transactions through an Epicor
Username and Password. Windows checks for existing Epicor user accounts to authenticate logins.
• UsernameSSLChannel - This NET.TCP binding authenticates transactions using a Secure Sockets Layer
(SSL) X509 certificate. Leverage this method for application servers that handle smart client installations
when users reside in different domains. By using an SSL certificate, users from these different domains
can log into the Epicor application.
Selecting this option causes the SSL Certificate Subject Name and DNS Endpoint Identity fields to
appear. You use these fields to enter the name of your SSL certificate and the identity of the server.
• Windows - This NET.TCP binding authenticates transactions using a Windows Username and Password.
Any user with a Windows Username and Password within this domain can successfully log into the Epicor
application.
• HttpBinaryUsernameSslChannel - This HTTP binding protocol authenticates using a Secure Sockets
Layer (SSL) X509 certificate. The data transfers between the client and server using Hypertext Transfer
Protocol (HTTP). Instead of the transport layer, the message that contains the data is encrypted.
Because this binding does not use Hypertext Transfer Protocol Secure (HTTPS), it tends to be slower than
bindings which use HTTPS.
Use this method for application servers that handle smart client installations when users reside in different
domains. By using an SSL certificate, users from these different domains can log into the Epicor ERP
application.
Selecting this option causes the SSL Certificate Subject Name and DNS Endpoint Identity fields to appear.
You use these fields to enter the name of your SSL certificate and the identity of the server.
• HttpsBinaryUsernameChannel - This HTTPS binding authenticates transactions using an Epicor Username
and Password. The data transfers between the client and server using Hypertext Transfer Protocol Secure
(HTTPS). HTTPS encrypts the data transfer.
The binding authenticates using a security token by specifying a valid authentication claim between
Epicor ERP and Epicor Identity Provider deployment. The data transfers between the client and server
using Hypertext Transfer Protocol Secure (HTTPS). This protocol is configured to move encryption handling
to an intermediary Application Request Router like F5 or a similar router.
Important When this binding is implemented, in order to avoid the AddressFilter mismatch error,
be sure to uncomment the AddressFilterModeAny node in web.config.
• HttpsBinaryAzureChannel - Use this protocol to enable authentication of ERP application users against
users in Microsoft Azure Active Directory (Azure AD).
This binding relies upon the user authenticating against Azure Active Directory and obtaining a token
to present to Epicor ERP. The data transfers between the client and server using Hypertext Transfer
Protocol Secure (HTTPS).
• HttpsBinaryIdpChannel - Use this protocol to enable authentication of ERP application against Epicor
Identity Provider (IdP).
Important Epicor Identity Provider is a new Global Authentication Service that unifies various
identity and authentication mechanisms across ERP products. The service will be made available
for approved customers in upcoming releases of Epicor ERP. By default, this option is only available
internally to Epicor.
This binding relies upon the user authenticating against IdP and obtaining a token to present to Epicor
ERP. The data transfers between the client and server using Hypertext Transfer Protocol Secure (HTTPS).
12. When you finish adding the task agent rule, click Save.
Continue to add the task agent rules you need. If you need to remove a task agent rule, highlight it on the grid
and click the Delete button.
Now the next time the system agent activates a schedule or a user launches a process or report, the tasks are
distributed to the application servers defined on the task agent rules.
Application Tuning
This section contains some application tips and techniques you can do inside the Epicor application. You should
only use the tips and techniques that match your use of the Epicor application.
Program Improvements
The Epicor ERP application is regularly tested for performance, and has demonstrated significant gains during
each service pack and major version release.
Various tests run against the same processes at the 8.03, 9.00, 9.05, and 10.00 releases demonstrate improved
performance with each new version. Complex processes such as Sales Order Entry and Job Entry were used to
evaluate the Epicor ERP application. If a system uses an earlier version of the Epicor ERP application, upgrading
to the latest version can help improve overall efficiency for the application.
Performance Enhancements
Review this list of enhancements to see if a program your organization uses is now improved for performance.
If it is, install the service pack in a test environment and evaluate the performance of the upgraded program. If
the program experiences better results, consider upgrading the Epicor ERP application to the latest release.
Application Troubleshooting
• Be sure to disable all schedules before you move from the training database to your live database. If you do
not, two MRP processes run at the same time, slowing down performance.
• Epicor recommends that you do not run MRP processing during the work day. Schedule MRP processing
during off hours.
• Be sure that your Business Activity Queries (BAQs) are properly written. If a BAQ pulls unnecessary data or a
calculation is improperly constructed, each time this BAQ is run it slows performance.
• Avoid creating several Business Process Management (BPM) procedures that generate email messages at the
same time (synchronously). If these procedures run frequently, performance is reduced. Instead, set up your
BPM procedures to run asynchronously.
Tip Remember that you can use the Epicor Administration Console to set up the server log so it records
BAQ and BPM calls. Then within the Performance and Diagnostic Tool, you can configure the Server Diagnostics
Results grid to display only BAQ and/or BPM calls. You do this by activating a filter in the MoreInformation
column.
• Avoid activating too many change logs. Only run change logs for the specific fields you need. When too many
logs are running, they slow performance.
• Performance slows when several large reports generate at the same time. To resolve this, move your report
generation processes to separate application servers. The previous Load Distribution section describes how
you set up task agent rules to distribute the load as needed.
• Likewise, when multiple applications run at the same time, performance slows. Try to only use the programs
you need when you need them. Close them if they are no longer active.
Client Cache
The Client Cache is the local Disk Cache used by the Epicor client machine. As the Epicor client needs various
items stored on the server and in the database, these items are downloaded and stored in the local Disk Cache
for better performance.
The local client cache stores items like custom context menus, themes, process calling xref files, customizations,
personalizations, the list of business objects accessed for security retrieval, and so on. Storing these items in the
client cache typically improves client performance, but sometimes too many items can accumulate in this folder
and slow performance instead.
As a standard practice, users should periodically clean out their local client cache to remove rarely used items.
To clear the cache:
2. From the Main Menu, click Options > Clear Client Cache.
3. When the client cache is cleared, exit the Epicor ERP application.
Now when you restart the Epicor ERP application, only items currently needed to launch the application are
loaded into the client cache. The client machine should have improved performance. However as time goes on,
more items will download to the client cache, and the user should run the Clear Client Cache option again.
If you would like to review the items currently in a client cache, you can locate this folder in the following default
directory locations:
• Older Windows Systems — Use Windows Explorer to navigate to the C:\Documents and Settings\All
Users\Application data\Epicor directory. Locate the name of the Epicor ERP client installation.
• Newer Windows Systems — Use Windows Explorer to navigate to the C:\ProgramData\Epicor directory.
Locate the name of the Epicor ERP client installation.
Tip You can change where the client cache is located by changing a value in the configuration settings
file. This xxx.sysconfig file (where xxx is the name of the file you use to run the client) is located in the
client's Config folder. Open this file in a text editor and enter a different directory path in the
AlternateCacheFolder setting.
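For example, a sketch of what this looks like inside the file (only the AlternateCacheFolder setting name comes from this guide; the surrounding element structure and the sample path are assumptions for illustration):

```xml
<!-- Fragment of a client xxx.sysconfig file. The surrounding structure
     shown here is illustrative; only the AlternateCacheFolder setting
     name comes from this guide. -->
<userSettings>
  <!-- Point the client cache at a different local folder -->
  <AlternateCacheFolder value="D:\EpicorCache" />
</userSettings>
```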
Memory Caching
Users can personalize programs by using memory caching. Memory caching keeps frequently used programs
in memory so they launch faster during the same Epicor ERP application session.
When a program is memory cached and the user closes the program, the program remains loaded in active
memory. Technically, the program is placed in a hidden mode. When the user launches the program again, the
form is already constructed in memory and quickly displays its interface.
Tip Do not confuse client caching with memory caching. Client caching refers to the various items that
move between the client and the server/database. Memory caching is a performance feature available
internally on most Epicor ERP application programs.
Memory caching takes resources from the system, including user objects. A limited number of user objects are
available on each client, and when a client reaches that limit, the Epicor ERP application can become unstable.
It can crash or display strange messages. The default limit is 10,000 user objects. A program like Customer
Maintenance can use 1,800 user objects, so users should only select a few programs for memory caching. If
needed, administrators can increase the number of user objects available on a client to 18,000. However, be
aware that only 64,000 user objects are available to the Windows operating system, so be careful how many
user objects you allow each client machine to use.
Memory Cached forms are removed from active memory when the Epicor client session ends (the Epicor.exe
file is closed). Users should only use this feature on a limited number of commonly used programs they use on
a daily basis.
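As a rough illustration of the arithmetic above (the per-program figure is the Customer Maintenance example from this guide; the reserve for the shell and other UI elements is an assumption, since actual usage varies by program):

```python
# Rough estimate of how many programs can be memory cached before the
# client nears its Windows user-object limit. Figures from the guide:
# default limit of 10,000 user objects, ~1,800 used by a large form
# such as Customer Maintenance. The 2,000-object reserve for the shell
# and other UI elements is an assumption.
def max_cached_programs(limit=10_000, per_program=1_800, reserve=2_000):
    return (limit - reserve) // per_program

print(max_cached_programs())              # 4 with the default 10,000 limit
print(max_cached_programs(limit=18_000))  # 8 with the raised 18,000 limit
```

This is why the guide suggests memory caching only a handful of commonly used programs: even at the raised limit, a few large forms consume most of the budget.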
Client Customizations
Use this technique to test the performance of a client customization against the base program version.
Make sure the user account currently logged into the application has Customization privileges. Then turn on
Developer mode. Depending on the interface style of your application, the instructions vary:
• If you use Classic Menu interface, from the Main Menu click Options > Developer Mode.
• If you use Modern Home Page interface, activate this mode either from the Menu by clicking the Settings
tile and selecting Developer Mode in the General Options group, or by moving your mouse pointer over
the bottom of the screen and click the Developer Mode wrench icon on the Application Bar.
• If you use Active Home Page style interface, from the home page click the Utilities icon and select Developer
Mode from the list. Alternately, go to Settings > General Options and select Developer Mode.
Note You can also press Ctrl + Shift + D to activate Developer Mode.
Navigate to the customized program and launch it. When the Select Customization window appears, select
the Base Only check box. Click OK and time how long it takes to launch the program. Close the program and
launch it again, selecting the customization. Time how long it takes the customized version of the program to
launch.
Be sure to test how long it takes to process sales orders both with and without the customization. If the
performance is acceptable, make this customization available to all the users in the company.
MRP Tips
The following tips can help you improve the performance of the Material Requirements Planning (MRP) process
and receive more targeted unfirm jobs and suggestions. By following these best practice tips, you place less
demand on your network and server resources and generate the specific MRP suggestions you need.
Unless noted, most of these tips describe options on the MRP Processing window. Launch this program:
Menu Path: Production Management > Material Requirements Planning > General Operations > Process MRP
Epicor recommends you do not run MRP processing during the work day. Schedule MRP processing during off
peak hours. To set MRP Processing to run on a recurring schedule, access the MRP Processing window. First select
a Schedule option and then click the Recurring check box.
You create the available schedule options within System Agent Maintenance.
Menu Path: System Setup > System Maintenance > System Agent
Be sure to disable all schedules before you move from the training database to your live database. If you do not,
two MRP processes run at the same time, slowing down performance.
You can run the MRP process using two calculation modes - Net Change and Regenerative. These calculation
modes generate the unfirm jobs, job suggestions, and purchasing suggestions.
You should typically run MRP processing in Net Change mode as much as possible, as it reduces the number of
calculations that must complete during the process. When MRP runs in Net Change mode, the process leaves
previously generated information in place and only updates suggestions for records that changed since the
previous MRP run. This calculation mode keeps your records current, only generating items that reflect new or
updated source requirements. It also reduces the resources required to run MRP on your server, generates the
MRP results faster, and frees up your server for other purposes.
Only use Regenerative mode on a regular, periodic schedule. Generally you should process MRP in
Regenerative mode once a week or once a month. This calculation mode deletes all previously generated MRP
information. Because of this, Regenerative mode actually runs through two routines - a routine that deletes all
of the unfirm jobs, job suggestions, and purchase suggestions, and then a second routine that regenerates
all of the suggestions.
While it is important to run a full regeneration periodically to make sure you have a complete set of records that
reflect the current state of your database, running MRP in Regenerative mode more frequently than once a week
will negatively impact the performance of MRP and your server.
Only activate the Run Finite Scheduling During MRP Calculation check box option if you require unfirm jobs and
job suggestions to be finitely scheduled in the near future. This check box is located on the Process MRP window.
Finite Capacity Scheduling is a calculation mode that does not allow more load to be scheduled above the
available capacity within a resource. The calculation also reviews any constrained materials involved in the selected
MRP jobs, preventing unfirm jobs from generating if the constrained material is not available at specific points
in the schedule. In order to run MRP processing in this mode, the scheduling functionality has to execute as well
to calculate the available capacity in each resource. This additional scheduling process requires significant resources
from your network to finite schedule the unfirm jobs and suggestions. Only jobs and suggestions that fall outside
the Finite Horizon value, a value defined for each site record within Site Maintenance, are scheduled infinitely
(infinite scheduling places no limit on resource group capacity).
Epicor recommends you should only run MRP with this option when you need the generated unfirm jobs and
suggestions to more closely reflect the realities of your upcoming production schedule.
If you select the Sort 0 Level MRP Jobs by Requested Date check box, you may reduce MRP performance.
This situation occurs because the MRP process does not send unfirm jobs for zero assembly parts to the scheduling
engine until these jobs are completely processed by the MRP engine. This option improves the accuracy of the
schedule, but it typically adds additional processing time for the MRP results to generate.
While the MRP process runs, the server can generate multiple processing threads to complete the operation.
The more MRP processes you can run, the faster unfirm jobs and suggestions are generated. You can also
improve performance by increasing the number of schedulers that run on your server; the more schedulers
you run, the faster the scheduling engine can schedule unfirm jobs. You modify the Number of MRP Processes
and Number of Schedulers values on the Process MRP window.
However as you increase the number of MRP processors and schedulers, the performance boost you receive will
eventually decline. This occurs because the server will run out of capacity to handle all of the multiple threads
you attempt to run concurrently. Your server has limited capacity, so there will be a point when using multiple
MRP processors and schedulers can slow down MRP performance as well.
To help you decide how many MRP processors and schedulers you can run at the same time, check your MRP
log to review the performance results. Continue to increase the MRP processes and schedulers until you notice
the MRP performance times begin to increase again. You should then be able to determine the optimal values
for your server.
Most servers can handle two MRP processes and two schedulers at the same time, so you could start by entering
a "2" value in both of these fields. As you run MRP, continue to monitor the MRP and Scheduling logs. If the
schedulers are consistently waiting for the next job, you have some options. You can remove one scheduling
thread to free up more CPU resources for the rest of the company. If you have CPU resources available, you can
also add one more thread to the Number of MRP Processes field and keep the Number of Schedulers value the
same. Likewise, if you notice times in the log where the MRP process threads are idle, they can also be used as
scheduling threads.
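The tuning procedure above — increase the thread counts until the logged run times begin to rise again — can be sketched as a small helper (a hypothetical illustration; the run times would come from your MRP log, and this is not an Epicor tool):

```python
def pick_thread_count(measured_runtimes):
    """Given {thread_count: MRP runtime in minutes} from successive test
    runs, return the count just before runtimes begin to rise again.
    A sketch of the manual tuning loop described in the guide."""
    best = None
    for threads in sorted(measured_runtimes):
        runtime = measured_runtimes[threads]
        if best is not None and runtime >= measured_runtimes[best]:
            break  # adding threads stopped helping; keep the previous count
        best = threads
    return best

# Example: times improve up to 4 threads, then regress at 5.
print(pick_thread_count({2: 60, 3: 48, 4: 41, 5: 45}))  # 4
```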
Tip For more information on processors and schedulers, review the MRP Logs section in the MRP Technical
Reference Guide.
The volume of suggestions generated by the MRP process can quickly become overwhelming. Too many suggestions
are difficult and time-consuming to manage and can hamper your ability to efficiently leverage MRP.
You can receive more appropriate results, however, by defining the Planning Time Fence value within either
Part Class Maintenance or Part Maintenance.
This value prevents changes to job suggestions, purchase suggestions, and unfirm jobs that occur within a specified
date range. If the Due Date on an MRP generated record falls between the Scheduled Start Date (defined on the
Process MRP program) and that date plus the Planning Time Fence value, the MRP engine does not change the
Quantity and Date values on these previously generated records. Because these records are not updated, you do
not need to review these unfirm jobs and suggestions, reducing the number of results you need to verify.
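The fence rule can be expressed as a simple date check (an illustrative sketch, not Epicor code; the field names are taken from the description above):

```python
from datetime import date, timedelta

def inside_planning_fence(due_date, scheduled_start, fence_days):
    """True when an MRP record's Due Date falls between the Scheduled
    Start Date and that date plus the Planning Time Fence, i.e. when
    the MRP engine leaves its Quantity and Date values untouched."""
    fence_end = scheduled_start + timedelta(days=fence_days)
    return scheduled_start <= due_date <= fence_end

start = date(2019, 5, 1)
print(inside_planning_fence(date(2019, 5, 10), start, 14))  # True  - record left alone
print(inside_planning_fence(date(2019, 5, 20), start, 14))  # False - record may change
```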
Both Part Class Maintenance and Part Maintenance contain two other fields that can reduce the number of
suggestions you need to review. The Reschedule In Time Delta and the Reschedule Out Time Delta values limit
the number of new suggestions generated by an MRP process run.
The Reschedule In Time Delta value defines a date range during which the MRP engine is prevented from
rescheduling supply suggestions that happen in the future. Any supply record identified for change with an End
Date less than or equal to the final date of this range will not generate a new suggestion. The Reschedule Out
Time Delta value functions in a similar way, but affects demand suggestions. This date range prevents the MRP
engine from rescheduling demand suggestions that occur in the future. Any demand record that may need to
be changed, but has an End Date less than or equal to the final date of this range, will not generate a new
suggestion.
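The two delta values apply the same kind of date-window test (again a sketch under the assumption that the window runs from the MRP run date; only the End Date comparison is stated explicitly above):

```python
from datetime import date, timedelta

def suppress_new_suggestion(end_date, window_start, delta_days):
    """True when a record's End Date is on or before the final date of
    the Reschedule In/Out window, so the MRP engine generates no new
    suggestion for it. The window start is assumed to be the MRP run
    date; the guide states only the End Date comparison."""
    return end_date <= window_start + timedelta(days=delta_days)

run = date(2019, 5, 1)
print(suppress_new_suggestion(date(2019, 5, 5), run, 7))   # True  - no new suggestion
print(suppress_new_suggestion(date(2019, 5, 15), run, 7))  # False - suggestion generated
```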
If you use a load balancer to improve performance, modify the system agent so it can find the main application
server that handles load balancing.
You update the system agent using System Agent Maintenance.
Menu Path: System Setup > System Maintenance > System Agent
Important This program is not available in Epicor Web Access.
You enter the load balancing application server within the AppServer URL field. This field is optional; enter the
URL for this application server when you need to improve the performance of Material Requirements Planning
(MRP) sub processing or other sub tasks. (As of this writing, only MRP runs as a sub task.)
Because sub tasks run inside the main application server, the task agent does not process them. When you define
the load balanced AppServer URL, MRP sub-processing runs on this specific application server. Your other
application servers now have more resources to handle other processes, improving the overall performance of
your Epicor ERP application.
Example net.tcp://OurAppServer/ERP10/
If MRP temporarily or completely freezes, your system may be running out of available memory to handle the
process.
If this happens, you should contact Epicor Technical Support. They will determine what is causing the freeze and
help you correct this issue. However before you call support, be sure to gather the following information:
Memory/Stack Trace
Use the Performance and Diagnostic Tool to record both a memory trace and a stack trace. This information will
help Epicor Technical Support identify which part of memory is growing and whether the stack trace is active.
To generate these traces:
2. Within the Performance and Diagnostic Tool, navigate to the Live Memory Inspection sheet.
3. In the Process ID field, select the MRP process. This drop-down list displays all the current w3wp processes
with their application pool names as well as the names of any Epicor .exe processes. Typically you select the
application pool that runs the slow application server to gather the performance results you need to review.
6. Click Analyze.
7. You are warned this action will freeze the Epicor ERP application. Click Yes to run the Memory/Stack Trace.
You should generate this memory trace and stack trace three to four times. By creating multiple trace files, you
give Epicor Technical Support a good idea of how MRP runs on your system.
Memory Dump
If MRP freezes during your tests, capture a Memory Dump. This large file records how your system is using
memory at the time of the freeze. The memory dump file is a separate trace from the memory trace and the
stack trace. When you create a memory dump file, you record a snapshot of the memory and stack traces used
by the selected process. To capture a memory dump:
1. When the MRP process freezes, return to the Performance and Diagnostic Tool and navigate to the Live
Memory Inspection sheet.
2. From the Process ID drop-down list, verify the MRP process displays.
5. You are warned this action will freeze the Epicor ERP application. Click Yes to generate the memory dump
file.
You now have both a series of memory/stack trace files and a memory dump file. When you place your call with
Epicor Technical Support, send these files to them for further analysis. These files will help support more effectively
determine what is causing MRP to freeze.
To launch MRP via REST calls, an AppServer URL pointing to a Windows authentication binding or an
Epicor user name/password binding must be specified on the System Agent Maintenance > Detail sheet.
Note that this value cannot be set to an Azure AD authentication binding.
This section details some performance tuning options for business activity queries. It also includes a section
that describes how to correct SQL syntax errors.
You can define some application server settings in the Epicor Administration Console to restrict how business
activity queries (BAQs) generate results. By defining these options, you can limit performance issues caused when
BAQs process large amounts of data.
Note that when you change these application server settings, you will cause the application server to restart. Be
sure to change these settings during a period of the day when few users are logged into the Epicor application.
1. You launch the Epicor Administration Console from your server machine. Depending on your operating
system, you launch this tool in different ways:
a. If you are on Windows Server 2008 R2, click Start > All Programs > Epicor Software > Epicor
Administrative Tools > Epicor Administration Console.
b. If you are on Windows Server 2012, press <Windows> + F to display the Charms bar;
from the Apps screen, select Epicor Administration Console.
2. From the tree view, expand the Server Management node and Epicor Server node.
4. Now from either the Action menu or the Actions pane, select Application Server Settings.
The Application Server Settings window displays. The BAQ values you modify are in the Application Settings
group box:
5. For the BAQ Query Max Result Rows field, leave the default setting at 0, indicating there is no row limit.
If you have a BAQ that is returning a large number of rows and is affecting performance, enter a value
(number of rows) in this field to limit the number of rows returned. This is similar to using the TOP clause
in SQL.
Tip If you do limit rows, you may not see a record you are expecting. Instead of entering a value
here, consider adding or adjusting criteria to your BAQ to make it more efficient.
6. Now in the BAQ Query Timeout field, enter how many seconds can elapse before the application server
stops the query.
By entering a value in this field, you define how long each BAQ is allowed to run. When a query attempts
to generate results and reaches this time limit, the application server stops the query and sends the user a
time out message.
8. The Server Manager dialog box displays, asking if you want to restart the application server. If this is a
good time to restart the server, click Yes.
Now when this application server processes BAQs, the queries generate using these row and timeout limits.
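The combined effect of the two settings can be illustrated as follows (a Python sketch with hypothetical names; the real enforcement happens inside the application server, and the guide notes the row cap is similar to SQL's TOP clause):

```python
import time

class BAQTimeout(Exception):
    """Raised when a query exceeds the BAQ Query Timeout (illustrative)."""

def run_baq(row_source, max_rows=0, timeout_seconds=30):
    """Sketch of the two limits: a row cap similar to SQL's TOP clause
    (0 = no limit), and a timeout that stops a long-running query."""
    started = time.monotonic()
    rows = []
    for row in row_source:
        if time.monotonic() - started > timeout_seconds:
            raise BAQTimeout("query stopped by the application server")
        rows.append(row)
        if max_rows and len(rows) >= max_rows:
            break  # row limit reached; remaining rows are dropped
    return rows

print(len(run_baq(range(1000), max_rows=100)))  # 100 - capped like TOP 100
print(len(run_baq(range(50))))                  # 50  - 0 means no row limit
```

As the Tip above notes, a row cap can silently hide records you expect; tightening the BAQ criteria is usually the better fix.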
This section contains a series of best practice methods that will help you develop more efficient business activity
queries (BAQs). If you follow these suggestions, you will have more success creating both display-only and
updatable BAQs.
Sorting Performance
Sorting data by a selected column is a powerful feature, but be aware that some significant processing time may
be required to display the reordered results. This situation is especially true when you sort a large amount of
data. The query tool must first return all of the records into memory before it can reorder them by the
selected column.
All of this processing occurs on the server, so the data calls need to move across the network before they arrive
at your client workstation. So if you sort on a large amount of data, be patient – the reordered results are on
their way.
Runaway BAQs
If you suspect a business activity query is causing poor performance, use the server logs and the Performance
and Diagnostic Tool to determine which BAQ is causing an issue. You do this by accessing the web.config file
and then setting the server log to Verbose. Run the process that launches the BAQ. When the BAQ completes
its run, open the server log in the Performance and Diagnostic tool. Navigate to the Server Diagnostics > Results
sheet. Group the results by Object Name and review the Execution Time values.
SQL function syntax is stricter than Progress syntax. If you previously ran the application using a Progress database
but have now moved to SQL, you may experience these syntax issues.
The main issue is that you can use abbreviations within Progress; for example, ABSOLUTE can be abbreviated to
ABS, ABSO, or ABSOL within Progress. BAQ formulas are directly sent to SQL. As long as these formulas do not
contain any abbreviations, they work as expected. However if a formula references an abbreviation, syntax issues
occur.
The following table displays the functions which are not identical between Progress and SQL. The characters
contained between the parentheses are optional characters in Progress.
This section outlines a series of tests you can run on your Epicor ERP application to verify its performance against
established metrics. These metrics were defined using a test machine.
Configuration of the test machine:
• Windows Server 2008 R2, 64-bit
• Dual Core Intel Xeon 5160 CPU @ 3 GHz
• 20 GB RAM
• Epicor SQL database installed on a separate database server with a 1 Gbit/sec Ethernet connection
• Epicor version 10.0.600
Test Setup
1. Install the Performance and Diagnostic Tool on your server machine as described in the previous section.
2. Log into your Epicor ERP application using both the Training database and the manager account. For the
User ID, enter manager; for the Password, enter manager.
3. From the Menu tree view, navigate to the Epicor USA company.
5. Close the System Monitor. If the System Monitor is running in the Task Tray, then right click on this icon
and select Exit. This eliminates GetRowsKeepIdle time calls to the server.
Test Procedure
The following test procedure provides you with a repeatable path that uses standard data delivered within the
Training database.
The results from the tests are captured in the client trace log. You can then analyze these results using the
Performance and Diagnostic Tool.
2. Depending on the interface style of the application, you launch the tracing log window in the following
ways:
a. When you run the application using the Classic Menu interface, you activate the trace log from the
Main Menu. Click Options > Tracing Options.
b. When you run the application using the Modern Home Page interface, you can activate the trace log
by clicking the Down Arrow at the bottom of the window. From the toolbar, click the Tracing Options
button. Likewise from the Modern Shell interface, you can activate the tracing log from the Home menu.
Click the Settings tile and the General Options setting. Select Tracing Options.
c. When you run the application using the Active Home Page interface, you can activate the trace log by
clicking the Utilities icon in the top right corner of the window. From the utilities list, select Tracing
Options. Alternately, in the Active Home Page interface, you can activate the tracing log from the Home
menu. Click the Settings icon and the General Options setting. Select Tracing Options.
Regardless of which method you use, the Tracing Options Form window displays.
4. Notice the Clear Log button. At certain points during the following tests, you will be asked to click this
button.
Clicking this button removes results from the trace log. You can then run the test so that only the specific calls
you want to review display in the client trace log.
5. Notice the Write button. At certain points during the following tests, you will be asked to click this button.
Clicking this button causes business object calls to be recorded in the database.
This test measures the observed time it takes for you to open a typical large form. This test will use Sales Order
Entry.
You will clear the Client Cache to measure how long it takes for the form to load initially after a new installation
or service pack/patch upgrade. You will then run the test twice to measure the time it takes the form to load
using both uncached memory and then cached memory.
After you complete the tests, you will write the client trace log file and use the Performance and Diagnostic Tool
to analyze the results.
8. Depending on the interface style, you launch the tracing log window in the following ways:
a. When you run the application using the Classic interface, you activate the trace log from the Main Menu.
Click Options > Tracing Options.
b. When you run the application using the Modern Shell interface, you can activate the trace log by clicking
the Down Arrow at the bottom of the window. From the toolbar, click the Tracing Options button.
c. Likewise from the Modern Shell interface, you can activate the tracing log from the Home menu. Click
the Settings tile and the General Options setting. Select Tracing Options.
Regardless of which method you use, the Tracing Options Form window displays.
10. Now from the General Options list, select the Clear Client Cache option.
11. When you are asked if you want to clear the client cache, click Yes.
12. Return to the Home screen and navigate to Sales Order Entry again.
13. Using the stopwatch on a smart phone or similar device, test how long it takes Sales Order Entry to display.
Activate the stopwatch and launch Sales Order Entry.
14. Stop the stopwatch the moment the form displays on your screen and the cursor flashes in the Sales
Order field.
15. Record this elapsed time as the First time form download and open value.
17. Once again, use a stopwatch on a smart phone or similar device to record how long it takes for the Sales
Order Entry form to display and the cursor to appear. Launch Sales Order Entry.
18. Record this elapsed time for the Second time form open value.
19. Return to the Tracing Options Form and write to the Trace Log as described in the previous Activate the
Trace Log section. Note the Log File name; this value uses the TraceDataxxx.log file format.
22. Browse to the Client File Trace Path to locate the client trace file you wrote as described previously.
23. Click the Generate Diagnostics button to capture and review the performance results.
Expected Results:
• First time form download and open: < 8 seconds
• Second time form open: < 6 seconds
The following screen capture shows an example of the business objects and their performance times. You should
see similar results on your Summary sheet.
If you see more object calls than above and much slower performance, it may be because the Sales Order Entry
form properties are not cached. If this is the case, you may see additional Lib type calls as shown below:
Use the following steps to verify and fix issues with the form performance test.
1. Locate the xxx.sysconfig configuration settings file used by the client installation. This file is typically located
in the Config folder under the client directory.
Default values:
• <MaxBOMRU value="100" />
• <MaxClssAttrMRU value="20" />
5. Now locate the cached Business Object and Class Attribute .xml files.
Directory paths on a Windows 7 client:
• C:\ProgramData\Epicor\<server-port>\10.0.700\EPIC03\BOSecMRUList\BOMRUList_<username>.xml
• C:\ProgramData\Epicor\<server-port>\10.0.700\EPIC03\ClsAttrMRUList\ClsAttrMRUList_<username>.xml
6. If the SalesOrder business object does not appear in either of these xml files, it implies that other business
objects are used more frequently on this client, so the SalesOrder object is not cached. To fix this situation,
do the following:
f. Review the two xml files. Among the other business objects, you should see a reference to the Sales
Order business object.
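Since the MRU files are plain xml, a quick scan can confirm whether the business object now appears (a sketch; the exact element layout inside these files is not documented here, so this simply searches the file text, and the example path is hypothetical):

```python
from pathlib import Path

def business_object_cached(mru_file, bo_name="SalesOrder"):
    """Return True when the named business object is referenced in a
    BOMRUList/ClsAttrMRUList xml file. A plain text search is used
    because the exact element layout is not documented in this guide."""
    path = Path(mru_file)
    return path.exists() and bo_name in path.read_text()

# Example (adjust the server, port, and user name for your client):
# business_object_cached(
#     r"C:\ProgramData\Epicor\MyServer-808\10.0.700\EPIC03"
#     r"\BOSecMRUList\BOMRUList_manager.xml")
```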
This standard metric test measures database retrieval time by selecting and paging through customers.
You will start the client trace log to capture the time taken to select 10 customers and page through them within
Customer Maintenance. After you complete this test, you write to the client trace log file and use the Performance
and Diagnostic Tool to analyze results.
1. Depending on the interface style, you launch the tracing log window in the following ways:
a. When you run the application using the Classic interface, you activate the trace log from the Main Menu.
Click Options > Tracing Options.
b. When you run the application using the Modern Home Page interface, you can activate the trace log
by clicking the Down Arrow at the bottom of the window. From the toolbar, click the Tracing Options
button. Likewise from the Modern Shell interface, you can activate the tracing log from the Home menu.
Click the Settings tile and the General Options setting. Select Tracing Options.
c. When you run the application using the Active Home Page interface, you can activate the trace log by
clicking the Utilities icon in the top right corner of the window. From the utilities list, select Tracing
Options. Alternately, in the Active Home Page interface, you can activate the tracing log from the Home
menu. Click the Settings icon and the General Options setting. Select Tracing Options.
Regardless of which method you use, the Tracing Options Form window displays.
3. Return to the Home screen and click the Menu button. Navigate to Customer Maintenance:
Menu Path: Sales Management > Order Management > Setup > Customer
6. Return to the Tracing Options Form and clear the Trace Log as described in the previous Activate the
Trace Log section.
7. Return to the search window and click the Search button to retrieve customer records.
10. Using the Navigation toolbar, click the Right Arrow button nine times to display the next nine customers.
11. Write to the Trace Log as described in the previous Test Setup section. Copy the Current Log File; this
value uses the TraceDataxxx.log file format.
13. Click on the Client Trace Analysis option on the Plug-Ins pane.
The Client Trace Analysis interface displays.
14. Either paste or browse to the Client File Trace Path to load in the client trace log you generated.
15. Click the Generate Diagnostics button to capture the performance results.
16. Click on the Summary tab and expand the GetByCustID Method Name.
Expected Results:
• Observed time to move between customers: < 1 second
Example breakdown by business object method from the Summary sheet:
This standard metrics test measures database update performance. During this test, you will create a sales order
that contains twenty detail lines.
You will enter a sales order header and then start the client tracing to capture the time it takes to enter the 20
lines. After you complete the test, you will write the client trace log file and use the Performance and Diagnostic
Tool to analyze the results.
6. Verify that the first five column headers on the Lines > List sheet match the column headers in the table
below.
If not, rearrange the columns to match this sequence:
7. Create a spreadsheet that contains these twenty sales order detail lines (or copy and paste these lines from
this electronic document).
8. Clear the Trace Log as described in the previous Activate the Trace Log section.
9. Copy the twenty detail lines from your spreadsheet into your clipboard. Do not select the column headers.
10. Right-click above the column headers in the Lines > List sheet; select Paste Insert from the context menu.
11. Wait while the twenty order detail lines are loaded into the Sales Order Entry form.
12. Write to the Trace Log as described in the previous Activate the Trace Log section. Note the Log File name;
this value uses the TraceDataxxx.log file format.
16. Navigate to the directory location that stores the log file you just generated, such as
C:\ProgramData\Epicor\log.
17. Select the most recent log file. It should have today's date and a recent time stamp.
18. Click the Generate Diagnostics button to capture the performance results.
19. Navigate to the Summary sheet and review the TotalExecutionTime value for each method. Note that each
method was called 20 times.
Expected Results:
• Total observed time for 20 lines: 36 seconds
This value is measured from the Results tab as the difference between the Start Time of the first method and the
End Time of the last method.
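The elapsed-time calculation described above can be reproduced outside the tool. This is a minimal sketch, assuming you have already read the Start Time of the first method and the End Time of the last method from the Results tab; the timestamps below are illustrative, not measured values:

```python
from datetime import datetime

# Illustrative values read from the Results tab -- not actual test data.
first_start = datetime.strptime("10:15:02.120", "%H:%M:%S.%f")
last_end = datetime.strptime("10:15:38.480", "%H:%M:%S.%f")

# Total observed time = End Time of the last method - Start Time of the first method.
elapsed = (last_end - first_start).total_seconds()
print(f"Total observed time: {elapsed:.2f} seconds")  # prints "Total observed time: 36.36 seconds"
```

A result near the expected value indicates normal update performance; a much larger gap points to a bottleneck worth investigating with the server logs described later in this guide.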
This standard metrics test measures database update performance. During this test, you will create a purchase
order that contains twenty detail lines.
You will enter a purchase order header and then start the client trace log to capture the time it takes to enter
the 20 detail lines. After you complete the test, you will write the client trace log file and use the Performance
and Diagnostic Tool to analyze the results.
5. Verify that the first seven column headers on the Inventory sheet match the column headers in the table
below.
If not, rearrange the columns to match this sequence:
6. Create a spreadsheet that contains these twenty purchase order detail lines (or copy and paste these lines
from this electronic document).
7. Clear the Trace Log as described in the previous Test Setup section.
8. Copy the twenty detail lines from your spreadsheet into your clipboard. Do not select the column headers.
9. Right-click above the column headers in the Inventory sheet; select Paste Insert from the context menu.
10. Wait while the twenty purchase order detail lines are loaded into the Purchase Order Entry form.
11. Write to the Trace Log as described in the previous Activate the Trace Log section. Note the Log File name;
this value uses the TraceDataxxx.log file format.
15. Navigate to the directory location that stores the log file you just generated, such as
C:\ProgramData\Epicor\log.
16. Select the most recent log file. It should have today's date and a recent time stamp.
17. Click the Generate Diagnostics button to capture the performance results.
18. Navigate to the Summary sheet and review the TotalExecutionTime value for each method. Note that each
method was called 20 times.
Expected Results:
• Total observed time for 20 lines: 26 seconds
This value is measured from the Results tab as the difference between the Start Time of the first method and the
End Time of the last method.
Support Checklist
Epicor Technical Support can resolve most issues that occur with your Epicor ERP application. However, to resolve
a problem more efficiently, the support analysts need detailed information about your issue and your overall
system.
You can significantly shorten how long it takes Epicor Technical Support to review and analyze your issue by first
eliminating its potential causes. By following a series of tests, you verify whether a customization or a Business
Process Management (BPM) directive is the source of the problem. If a customization or a BPM directive is the
cause, you may be able to resolve the issue without contacting support.
However if these tests do not resolve your issue, you need to contact Epicor Technical Support. Before you call
and/or email, you gather a series of logs and system files. You then compress these files into a single archive (.zip
or .rar) file. Send this file to Epicor Technical Support as an email attachment or upload it to the Epicor FTP site.
You should also make sure you have thoroughly documented the issue by providing details about your system
and the steps required to duplicate the issue. By gathering this information before you contact support, you will
reduce the number of calls and emails required to thoroughly explore and resolve your issue.
If you complete these tasks and the performance issue continues, you next pull together the information for the
support call.
Gather System Information and Logs
Gather the following information and place it within a central folder.
After you have finished gathering this information, you are ready to contact Epicor Technical Support. Create
the support call and send the files you gathered to Epicor Technical Support for review.
The following series of topics describe each Support Checklist step in detail. Be sure to follow these steps to
ensure you are gathering the correct information.
Do the following series of tests to verify this issue occurs in the base Epicor ERP application. Through these tests,
you may discover the source of the issue is a customization, personalization, or a Business Process Management
(BPM) directive.
Before you do these tests, be sure to log into the Epicor ERP application through a user account that has
customization privileges. If needed, launch User Account Security Maintenance to find and update a user
account with these rights.
8. Next log into the Epicor ERP application using this account.
Check Customizations
For the first test, verify whether this issue is caused by a customization or a personalization of the base form.
4. Click OK.
a. If the issue still appears, the customization or personalization is not causing the problem. Shut off
Developer Mode and move on to the Disable BPMs test.
b. If the issue does not appear, the customization or personalization is causing the issue. Contact the person
who created the customization/personalization to fix the error.
You next verify whether a Business Process Management (BPM) directive is causing the issue.
4. Copy the web.config file and paste it into a separate directory. You can then restore this original file later.
5. Now return to the Server subfolder and open the original web.config file in Notepad or a similar text
editor.
9. To activate this change, you need to recycle the application pool. Launch the Epicor Administration
Console.
11. Right-click the application server icon; from the context menu, select Recycle IIS Application Pool.
12. A message displays asking if you are sure you want to recycle the application pool; click Yes.
14. Once again, duplicate the steps that cause the issue.
a. If the issue still appears, BPM directives are not causing the issue. Return to the server machine and
reactivate BPM processing by changing the web.config setting back to customizationSettings
disabled="false". You have now eliminated the possibility that the error is caused by either a
customization or a BPM directive, and you should start preparing the data required for the support call.
b. If the issue does not appear, you next must verify whether your BPM directives need to be recompiled.
This may resolve the issue. Move on to the next Recompile BPM Directives test.
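The steps above toggle BPM processing through a single attribute in web.config. The following is a hedged sketch of the relevant element; its exact placement and any surrounding attributes vary by version, so locate and edit the existing customizationSettings element in your own file rather than pasting this in:

```xml
<!-- Temporarily bypass BPM/customization processing for the test: -->
<customizationSettings disabled="true" />

<!-- After the test, restore the original setting: -->
<!-- <customizationSettings disabled="false" /> -->
```

Remember that the change only takes effect after you recycle the IIS application pool, as described in steps 9 through 12.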
The issue may be caused by BPM directives that are out of date. By recompiling them, you update the BPM
directives to match the current version.
a. If the issue still appears, you have eliminated both customizations and BPM directives as the source of
the issue. You next should gather the information Epicor Technical Support needs to analyze the issue.
Move on to the next Main Details topic.
b. If the issue does not appear, recompiling the BPM directives may have corrected the issue. To make sure,
try recompiling the BPM directives again in a test environment for the live database. If the issue still
doesn't appear, the issue is resolved. However if the issue appears again, gather the information and
files you need for the support call.
Identify Program
Be sure you determine the program or programs affected by the issue. If multiple programs are affected, create
a file that contains this information for later reference.
Documenting the specific programs affected by the issue is a crucial checklist task. Be sure you keep track of
which programs are experiencing issues, as you will need to prominently list them at the beginning of your
support call.
Main Details
To begin preparing your support call, gather these primary details. Be sure to have this information available at
the beginning of your call or email.
1. Site ID -- The identifier for your Epicor support account. Epicor Technical Support can then verify whether
you are on a maintenance plan.
2. Company Name -- The name of your organization. This will help support analysts check on previous calls
from the same company.
3. Call Number -- If you are contacting support about an existing issue, include the call number from the
previous call. The support representative can then look up the history for this call.
4. Epicor Version -- Include the exact number for the version of the Epicor ERP application you use. For
example: 8.03.400, 9.05.700, 10.1.300, and so on.
If the issue causes an error message to display, be sure to save the log that generates with the error message.
Error messages display in a dialog box. This dialog box has options you use to save the error message log. To
display and save the error message log:
1. Press and hold the <Ctrl> key; now click the <Insert> key.
The error message log is placed on your clipboard.
You now have an error message file you can send to Epicor Technical Support.
Issue Details
2. Has the Epicor ERP application always had this issue, or did it start after a change was made to the
application?
4. Does this issue affect a single user, a group of users who work in the same area (for example, shop floor
users), or all users?
5. Does the issue happen on multiple workstations, or does it just happen for a specific user?
6. What specific program or programs are affected by the issue? Be sure to indicate whether this is an issue
with Job Entry, MRP Processing, Sales Order Entry, and so on.
The Steps Recorder is a Windows utility that records the steps required to duplicate an issue. It also saves screen
captures of each step, so Epicor Technical Support can then review these screen captures.
This utility was introduced in Windows 7 and Windows Server 2008 R2.
1. Before you begin, you should create an EpicorSupport folder. You will place the files and logs you gather
in this central folder.
6. Click the Down Arrow; from this drop-down list, select Settings.
8. Click OK.
12. Navigate to the directory where you are saving the support files.
You next gather information on how your Epicor ERP environment is configured. You do this by locating a series
of system files, copying them, and then compressing them.
Tip If you haven't already done so, create an EpicorSupport folder. As you copy system files and generate
server logs, place them in this central EpicorSupport folder.
System Information
Next gather the following information about your Epicor application server and Internet Information Services (IIS).
3. Enter msinfo32.
4. Click OK.
The System Information window displays.
7. Click Save.
The file that contains your system information is saved to this location.
Important If your Epicor ERP application and SQL Server are on the same machine, you only need to do
these steps once. If these applications are on different machines, repeat these steps on each machine.
4. For this next command, enter your Epicor Site ID and server name in the designated parts of the command
statement. Enter Backup-WebConfiguration -Name: [SiteId]_[ServerName] and press <Enter>.
11. If you have multiple application servers, repeat steps 8-10 to create archives for each W3SVC1 folder. Be
sure to identify which archive belongs to which specific application server.
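The backup command in step 4 can be run from an elevated PowerShell prompt. A minimal sketch, assuming the WebAdministration module is available on the server; SITE01 and APPSVR01 are placeholder values, not real identifiers:

```powershell
# Import the IIS administration cmdlets (requires an elevated prompt).
Import-Module WebAdministration

# Placeholder values -- substitute your actual Epicor Site ID and server name.
Backup-WebConfiguration -Name "SITE01_APPSVR01"

# List existing IIS configuration backups to confirm the backup was created.
Get-WebConfigurationBackup
```

Naming each backup with the Site ID and server name, as the step instructs, makes it easy to match archives to application servers later.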
Configuration Files
Copy the web.config and app.config files for the Epicor ERP application.
1. To find where the web.config file is located, launch the Epicor Administration Console.
3. Now launch the Application Server Configuration window. You can do this in the following ways:
a. Right-click the [ApplicationServer] node; from the context menu, select Application Server
Configuration.
5. Review the directory path that displays in the Web Site Directory field.
6. Use Windows Explorer to navigate to this directory.
Now gather the information support needs for the task agent and any additional setup configuration information.
1. Launch Windows Explorer.
4. If any Setup Configuration folders display in this folder, compress them as well.
4. Copy the following files and place them in your EpicorSupport directory:
• Application.evtx
• Epicor App Server.evtx
• Epicor ICE Task Agent service.evtx
• EpiSSRS.evtx
• System.evtx
You next gather performance data from the Epicor ERP application. You do this by generating client (UI Trace)
logs and server logs. You also capture system log information.
You can generate client (UI Trace) logs either by activating them on a user account or by directly activating them
on the client installation.
b. On the Detail sheet, click the User ID... button to find and select the Manager user account.
f. Click Save.
The next time a user logs in with this account, the client (UI Trace) log will generate.
a. When you run the application using the Classic Menu interface, you activate the trace log from the
Main Menu. Click Options > Tracing Options.
b. When you run the application using the Modern Home Page interface, you can activate the trace log
by clicking the Down Arrow at the bottom of the window. From the toolbar, click the Tracing Options
button. Likewise from the Modern Shell interface, you can activate the tracing log from the Home menu.
Click the Settings tile and the General Options setting. Select Tracing Options.
c. When you run the application using the Active Home Page interface, you can activate the trace log by
clicking the Utilities icon in the top right corner of the window. From the utilities list, select Tracing
Options. Alternately, in the Active Home Page interface, you can activate the tracing log from the Home
menu. Click the Settings icon and the General Options setting. Select Tracing Options.
f. Now activate the program, process, or report that is causing the performance issue.
The client (UI Trace) log generates, using your selected options.
Run these logs for as long as you need; when they have gathered enough information, you can then deactivate
them.
You next generate one or multiple application server (appserver) logs to send to Epicor Technical Support. These
logs are key for resolving your issue, as they help the support team see what is occurring in your system.
You first activate server logs in the Epicor Administration Console. Then repeat the activity in the Epicor ERP
application that caused the issue. If you are tracking a performance issue, you may need to generate multiple
server logs to record the time periods during which your organization experiences slow performance.
Tip To learn more about evaluating performance issues and the tools you can use, review the Performance
Tuning Guide. This guide is available in the application help in the System Management > Working With
System Management node.
2. Launch the Epicor Administration Console. If this program is not on the desktop, launch a search to find
it. You can also launch it by clicking Start > All Programs > Epicor Software > Epicor(VersionNumber)
> Epicor Administrative Tools > Epicor Administration Console.
4. Select the application server that runs your Epicor ERP application.
5. Launch the Application Server Settings window. You can do this in the following ways:
a. Right-click the [ApplicationServer] node; from the context menu, select Application Server Settings.
7. Now in the Max Log File Size field, enter how large each file can grow before the Epicor Administration
Console creates a new server log file.
Notice you can limit the file size using Bytes, Kilobytes, Megabytes, and Gigabytes.
a. Verbose Logging -- Causes the server log to display all the details for each method call sent to the
server.
b. Trigger Hits -- Records additional information about trigger activity that occurred.
c. BPM Logging -- Tracks any Business Process Management (BPM) directives currently running on your
system.
d. Detailed Exceptions -- Displays information about any exception messages that displayed while the
server log ran.
e. ERP DB Hits -- Adds information about database activity to the server log. This activity originates from
the Epicor ERP application.
f. BAQ Logging -- Tracks any Business Activity Query (BAQ) transactions that ran against your Epicor ERP
database.
a. System DB Hits -- Adds information about database activity to the server log. This activity originates
from the server.
b. System Table Methods -- Details any methods that the system ran.
Important If you are not reporting a performance issue, do not select these Advanced Logging
options. These options will slow system performance.
13. Now within the Epicor ERP application, either repeat the steps that caused the issue or run the daily routine
over a series of working days.
14. When you are satisfied that the server log(s) contain enough information about the issue, use Windows
Explorer to navigate to the c:\inetpub\wwwroot\EpicorTest10\server directory.
16. Compress these files using 7zip, WinRAR, or the built-in Windows zip utility.
Tip Epicor Technical Support recommends you always generate server logs. The Standard Logging options
do not affect performance, so by continuously generating server logs you will already have a series of logs
to send to support. However if you generate server logs using the Advanced Logging options, be sure to
shut them off after Epicor Technical Support has received enough data.
Capture Logs
You next capture application server logs and database server logs by first indicating which types of logs you want
to save to the Results Path location(s). You then run the capture process.
This process creates a backup copy of each application server log and/or database server log. The original logs
are still available in the server directory location.
2. From the Plugins tree view, select the Log Capture node.
The Server Information > App Servers sheet displays.
7. Select the Backup All Servers Logs check box to cause the Performance and Diagnostic Tool to copy all
server logs to the Results Path location(s).
8. If you want to include event logs as well, select the Backup All Event Logs check box.
9. To add the web.config and machine.config (configuration) files to the Results Path location(s), select the
Backup webconfig and machine config files check box.
The Log field details the capture process and indicates when this process is complete. The Performance and
Diagnostic Tool selects logs from the application servers and the database server directory folders and then copies
them to the Results Path location or locations. You can then access these log files from the designated folder or
folders.
After you have gathered the files and logs described in the previous topics, you are ready to send the data to
Epicor Technical Support and start your support call.
1. Navigate to the EpicorSupport folder that contains the files you generated and gathered.
2. Compress the files contained in this folder. Use 7zip, WinRAR, or the built-in Windows zip utility.
3. If you have a call number for this issue, name the file [CallNumber].zip or [CallNumber].rar. If you do not
have a call number, use your Site ID and name the file [SiteID].zip or [SiteID].rar.
a. If the compressed file is less than 5MB in size, send the file as an email attachment.
b. If the compressed file is larger than 5MB in size, upload the compressed file to the Epicor Support FTP
site. The rest of the steps in this topic describe how you upload the file to this site.
6. Log into the FTP site. Enter the User name and Password you use to log into EpicWeb.
9. When this process is complete, send Epicor Technical Support an email that details your issue and include
the name and the size of the uploaded file.
10. Epicor Technical Support will review your information and contact you as soon as possible.
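The compress-and-name steps above can be sketched in a few lines. This is a minimal illustration with a hypothetical call number; it only automates the local archiving and the 5MB decision, not the email or FTP upload itself:

```python
import shutil
from pathlib import Path

def archive_support_folder(folder, call_number):
    """Zip the support folder and name the archive [CallNumber].zip."""
    archive = shutil.make_archive(call_number, "zip", root_dir=folder)
    size_mb = Path(archive).stat().st_size / (1024 * 1024)
    # Under 5 MB: send as an email attachment. 5 MB or more: upload to the
    # Epicor Support FTP site.
    method = "email attachment" if size_mb < 5 else "FTP upload"
    return archive, method

# Example usage (hypothetical call number):
# archive, method = archive_support_folder(r"C:\EpicorSupport", "CALL12345")
```

If you have no call number, pass your Site ID instead so the archive is named [SiteID].zip, as described in step 3.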
Once you have determined some actual and potential causes for poor performance, you are ready to try some
options.
The main strategy to remember is to always change one aspect of the system at a time. That way you can clearly
evaluate the benefits and costs of each change. When you try a performance option, do the following:
• Test how long it takes to run a process before you apply a performance option. Then after you implement the
option, test the same process again. You should see a significant reduction in processing time.
• Be sure to record each change and why you made it, as you then can review what you did later on. Write
comments in your scripts, customizations, web configuration file, and other locations to document the changes.
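The before/after comparison in the first bullet can be quantified simply. A minimal sketch with illustrative timings (not measured values):

```python
def improvement_pct(before_seconds, after_seconds):
    """Percent reduction in elapsed time after applying a performance option."""
    return (before_seconds - after_seconds) / before_seconds * 100

# Illustrative: a process that took 120 seconds before tuning and 90 after.
print(f"{improvement_pct(120, 90):.1f}% faster")  # prints "25.0% faster"
```

Recording this figure alongside the change that produced it gives you the audit trail recommended in the second bullet.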
Remember that you can gain a lot of performance by doing just a few things. Usually adding memory and spreading
the disk workload across as many disks as possible gives you the best performance gains. Always stop after you
have accomplished enough; the more tuning you do, the smaller the return on your investment of time.