
I. Executive summary

II. Comparison of accuracy based on dates


Table 1. Comparison of achieved accuracy rating using processing date¹ and verification date²

¹ processing date – the date when the orders were processed by SPi

The objective of this simple study is to determine the accuracy that would be achieved if the orders were grouped by their processing date instead of by the date they were verified or inspected by MHE.
In addition, the team would also like to see whether errors flagged by MHE are accurately accounted for based on the date the orders were processed. The underlying assumption is that some orders might have been verified late, with their flagged errors already addressed in previous week(s).
As shown in Table 1, the difference in accuracy rating between the two dates is not significant: we recorded an average difference of only 0.04 points. The difference arises because MHE verifies the order files 1-2 days after SPi processes the orders.
Based on the given document records, all flagged errors are accounted for properly whether the team uses the processing date or the MHE verification date. No errors relating to the assumption in the previous bullet were encountered.
III. Accuracy

a. Accuracy computation

Currently, the team follows the MHE way of computing accuracy.
To get the accuracy rating per order:
Case 1: one error type
(2) Contact Name

Order 1 = 100 - (no. of errors * weight)
Order 1 = 100 - (2 * 2.9)
Order 1 = 100 - 5.8
Order 1 = 94.2

Case 2: more than one error type
(1) Contact Name and (2) Sales Channel

Order 1 = 100 - (no. of errors * weight)
Order 1 = 100 - ((1 * 2.9) + (2 * 0.65))
Order 1 = 100 - (2.9 + 1.3)
Order 1 = 95.8
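The per-order computation can be sketched in code (a minimal sketch; the two error types and weights below are taken from the examples above, not from the full 46-type list):

```python
# Minimal sketch of the MHE-style per-order accuracy computation.
# The weights cover only the two error types used in the worked
# examples; the full MHE list has 46 weighted error types.
WEIGHTS = {"Contact Name": 2.9, "Sales Channel": 0.65}

def order_accuracy(errors):
    """errors maps an error type to the number of times it occurred."""
    deduction = sum(count * WEIGHTS[etype] for etype, count in errors.items())
    return round(100 - deduction, 2)

# Case 1: two Contact Name errors -> 100 - (2 * 2.9)
print(order_accuracy({"Contact Name": 2}))                      # 94.2
# Case 2: one Contact Name and two Sales Channel errors
print(order_accuracy({"Contact Name": 1, "Sales Channel": 2}))  # 95.8
```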
To get the overall accuracy rating, simply average the per-order accuracies:

Order 1: 94.2
Order 2: 95.8
Order 3: 100
Order 4: 100
Order 5: 100

Overall Accuracy: 98

² verification date – the date when the orders were inspected and verified by MHE

It is worth mentioning that the MHE way of computing accuracy is more lenient than the generally accepted way, where all weighted errors are deducted from a single 100-point base:

Accuracy = 100 - (no. of errors * weight)

Overall Accuracy = 100 - ((1 * 2.9) + (1 * 2.9) + (2 * 0.65) + 0 + 0 + 0)
Overall Accuracy = 92.9
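The gap between the two schemes on the five-order example can be checked with a short sketch (all figures taken from the worked example above):

```python
# MHE way: average the per-order accuracies of the five-order example.
order_accuracies = [94.2, 95.8, 100, 100, 100]
mhe_overall = sum(order_accuracies) / len(order_accuracies)

# Stricter way: deduct every weighted error from a single 100-point
# base, using the deductions exactly as written in the report.
strict_overall = 100 - ((1 * 2.9) + (1 * 2.9) + (2 * 0.65))

print(round(mhe_overall, 1))     # 98.0
print(round(strict_overall, 1))  # 92.9
```

Averaging dilutes each order's deduction by the number of orders processed, which is why the MHE figure sits five points higher on the same errors.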

b. Weekly trending
Figure 1. Weekly: Error and Accuracy Trending.

The data are provided by MHE. SPi downloads the inspected and verified orders from the MHE quality system on a weekly basis to check the achieved overall and per-agent accuracy for the week.
SPi also verifies the inspection results on a weekly basis and provides the verification results to MHE. SPi focuses on the validity of flagged errors based on the provided instructions, updates, and standard operating procedures, and reconciles any discrepancy in the data and accuracy rating if flagged errors are found invalid.
Since MHE started live production last September, SPi has processed approximately 7,000 orders.
Based on the provided data, 5,045 (72%) of the processed orders were inspected and verified by MHE, and of those, 326 (6%) were found with errors.
It is worth noting that the team managed to reduce errors week-on-week as they moved along the learning curve, yielding a 77% reduction in error count. However, the error count reduction does not by itself improve the overall accuracy, as each error carries its own weight according to criticality.
o MHE provided the list of error opportunities in an order.
o It consists of 46 error types. Each error type has its own weight based on criticality.
o The weights range from 0.32, the lowest, to 3.55, the highest.
o If an agent processes an average of 20 orders per day, roughly equivalent to 100 orders per week, Table 2 suggests how many error count(s) are allowed for the agent's accuracy not to fall below 99.97.

Table 2. Simple Simulation: allowable error count per weight (if N = 100)
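One way such a simulation could be derived is sketched below. It assumes the MHE averaging scheme over the agent's N = 100 weekly orders, so k errors of weight w cost (k * w) / N points overall; the sample weights other than the quoted extremes 0.32 and 3.55 are illustrative, and the actual Table 2 values are not reproduced here.

```python
# Sketch: under the MHE averaging scheme with N = 100 orders, k errors
# of weight w pull the weekly accuracy down to 100 - (k * w) / N, so
# the 99.97 threshold holds as long as k * w <= 3.
N, THRESHOLD = 100, 99.97

def allowed_errors(weight, n=N, threshold=THRESHOLD):
    """Largest error count k that keeps the weekly accuracy >= threshold."""
    k = 0
    while 100 - ((k + 1) * weight) / n >= threshold:
        k += 1
    return k

# 0.32 and 3.55 are the lowest and highest weights quoted above;
# 0.65 and 2.9 are the weights from the worked examples.
for w in (0.32, 0.65, 2.9, 3.55):
    print(f"weight {w}: up to {allowed_errors(w)} error(s)")
```

At the highest weight (3.55) even a single error breaches the 99.97 threshold, which illustrates why error-count reduction alone does not guarantee an accuracy improvement.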

As shown in Figure 1, in the first six weeks SPi achieved and exceeded the overall accuracy threshold. Please note that MHE set a lower threshold in those weeks because SPi, at that time, was still undergoing training and guided live production.
Starting WK 6, the chatter became stricter in their responses and MHE limited some assistance to SPi agents, as the skills had been built up and the training had been completed. On WK 7, the threshold was raised to 99.97. Since then, SPi has consistently missed the threshold, although please note that WK 16 almost hit it.

c. Accuracy Trending per Agent


Table 3. Week-on-week Agents' Accuracy

Despite SPi recording a below-par overall accuracy rating, at the agent level, as shown in Table 3, there are five agents hitting accuracies of 99.97 and 100%. These agents met or exceeded the threshold and have already been released from MHE QA inspection.
It is also worth mentioning that these agents have been released from the rigorous MHE quality inspection: only their processed orders tagged with a hold will undergo MHE inspection. This gives them a great chance of achieving a consistently higher accuracy rating, which would also reflect on the overall performance of SPi.
It is highly suggested to continuously coach and mentor the two remaining agents, Jaime and John, so that they too can be released from MHE's stringent inspections.

d. Overall Top Contributors

Table 4. Top Ten Errors (covering WK7 to present)

Shown in Table 4 are the top 10 error contributors. It is worth mentioning that the error relating to Incorrect/No Hold Applied, which had consistently topped the list since week one, has been at zero for two consecutive weeks already.
On the other hand, errors relating to Attachments, Payment Terms, and Request Date have doubled since they were last recorded.
These errors resulted from inattention to detail: the agents missed verifying and validating the needed information in the order forms and MHE systems.

IV. Comparison of internal QC vs MHE QC


a. Number of orders QC'd by SPi and number of orders QC'd by MHE
b. What are the differences
   i. Discrepancies on chased errors
   ii. Highlight the weaknesses of SPi QC
c. Resolutions
