
Data Improvement in Lab Verification of Smart Power Products using DoE


Anamaria Oros, Ingrid Kovacs, Marina Topa
Bases of Electronics Department
Technical University of Cluj-Napoca
Cluj-Napoca, Romania
anamaria.oros@bel.utcluj.ro

Andi Buzo, Monica Rafaila, Manuel Harrant, Georg Pelz
Automotive Power
Infineon Technologies
Neubiberg, Germany

Abstract—The paper focuses on treating and characterizing variations that occur in measurements or that are intrinsic to electronic systems. Methods illustrating basic principles of Design of Experiments, such as replication and blocking, are implemented and used so that valid and objective conclusions are drawn. Replication is used for three purposes: verifying the metamodel adequacy, defining the confidence interval of the measured mean value of the system's response and determining the minimum number of replications needed in order to measure the system's response with a given accuracy. Blocking helps to eliminate the systematic error that appears due to the measuring conditions and was used in the screening process. The methods were applied on a lighting control system used in automotive applications.
Index Terms—Design of Experiments, Metamodelling, Blocking, Replication, Optimization, Real-Time Load Emulation.

I. INTRODUCTION
Electronic engineers face great challenges when trying to improve the quality of a product and achieve robustness. The difficulty lies in finding a solution that lowers product cost and shortens product design and development time. The process is becoming time-consuming because of the increased complexity and because the system might be affected by its real-life application environment. Without the aid of automation, the verification of the product cannot be completed in due time.
Experiments play an important role in observing the effect of each factor, but due to expenses only a limited number of experiments can be performed. In order to solve the problems posed by large amounts of data, Design of Experiments (DoE) and metamodelling approaches can be used to get more information from less data [1]. Metamodelling means using statistical regression to estimate the dependency of the system's response on the factors, i.e. the system's inputs and/or parameters.
Although there are ways to improve the design, there is no certainty that the data is accurate enough. Data can contain noise/variations induced internally or by the equipment used for measurements. While the variation in the response caused by the system's factors is of interest, the variation (noise) introduced by the measurement equipment/environment can influence the data accuracy and the conclusions on the system behavior and performance. A prediction or an estimate of the amount of variation encountered is needed. Such an estimation can be used to evaluate the size and statistical significance of the variability that can be attributed to the important factors.
In this paper, we want to show how two important DoE concepts, i.e. blocking and replication, can be used to improve the analysis of electronic system performance in the presence of variations of both factors and measurements. These concepts are used in the context of a metamodelling-based analysis. The blocking principle is used to improve the screening process, i.e. the process of identifying the factors that affect the response of interest. Blocking is used because the system might have factors that are not of interest to us but which affect the experiments and therefore should not be neglected. For example, suppose we are interested in how the parameters of a system affect the response, and the measurements on the system are made using several batteries of the same kind. In this case, we are not interested in the relation between the battery's voltage and the response, but its value affects the response. By the blocking technique we can eliminate the effect of the battery so that it is not accounted for as extra noise added to the system.
Replications were added so that we can answer questions such as [1]: Does the metamodel fit the data? What is the confidence that the mean value of the response represents its real value? How many measurements are needed so that we have an acceptable confidence interval?
Our paper contains a summary of the state of the art in this research field in Section II. Section III describes our proposed approach on how to use replication and blocking. An example of how this approach can be applied on real-life applications is given in Section IV. Conclusions are drawn in Section V.
II. STATE OF THE ART SUMMARY
Design of Experiments and metamodelling techniques are used in the presented work. DoE is a concept discussed in several papers in the literature [1, 2, 3]. After being successfully used in many domains, DoE is nowadays used in the verification of electronics. Several definitions and implementations have been published. Blocking, i.e. arranging similar experimental runs in groups, is mostly used to reduce known variability and to improve precision. The replication principle helps to estimate the experimental error and mean effects.
From our point of view, the most significant description of experimental design in engineering is offered by Montgomery [1]. In his work, DoE is defined as the process of planning experiments in such a way that the data is collected in order to obtain accurate results. Fisher emphasized in [2] that, in order to avoid most of the problems encountered in analysis, it is absolutely necessary to plan a design and conduct experiments accordingly. Montgomery defines principles like replication, randomization and blocking, and advises their usage so that better results can be obtained. In his work, he used design techniques like blocking to improve data precision and he explains why replication, i.e. the repetition of the experiment, should be used.
In [3], Kirk defines DoE and specifies the way in which replication should be used. He states that replication consists of the analysis of two or more experimental units under the same assumptions and can be used to obtain accurate estimations of the error and treatment effects. Donnelly gives in [4] a more practical definition of DoE. He states that it is efficient in solving problems with fewer resources by running efficient subsets of every possible combination. Also, if replication is used, the information regarding the response's variation can be extracted while running only the trials that are normally needed.
Two types of factors are involved in experiments: design factors, i.e. factors selected for analysis, and nuisance factors, i.e. factors which are not of interest in the present experiment but can affect it. The nuisance factors can be controllable or uncontrollable. In the first case, the measurements can be grouped (in blocks) by the level of the nuisance factors and treated as such. Such factors are also called blocking factors; an example is the ambient temperature at which the measurements are made. If the nuisance factors are uncontrollable but measurable, methods such as analysis of covariance can be used to eliminate their effect [1]. If the nuisance factors are unknown and thus uncontrollable, then randomization, i.e. random allocation of the design's runs, is used in order to lower their effect. The last two issues are not addressed in this paper. Table I summarizes the above classification.
TABLE I. NUISANCE FACTORS CLASSIFICATION. WAYS TO TREAT FACTORS IN EXPERIMENTAL DESIGN

Characteristics                        Examples                    How to treat
Uncontrollable, unknown                Experimenter bias           Randomization
Uncontrollable, known and measurable   Weight, previous learning   Analysis of covariance
Controllable, known                    Batch, time, gender         Blocking

III. THE PROPOSED APPROACH ON ANALYSIS IMPROVEMENT

In our approach, we improve the classical DoE and regression methods by using two important concepts: blocking and replication. We start by DoE, i.e. deciding the factor settings for the experiment. The factor levels are chosen so that we can build a metamodel with all low-order interactions among factors, as indicated in equation (1):

y(x) = c^{(0)} + \sum_{o=1}^{2} \sum_{i=1}^{n} c_i^{(o)} x_i^{o} + \sum_{j=1}^{n-1} \sum_{k=j+1}^{n} c_{jk} x_j x_k + \varepsilon        (1)

where y represents the response of the system, x are the factors, ε is the random error and c represents the coefficients of the metamodel. These coefficients represent the effects of the factors on the response of interest as follows: c_i^(1) are the main effects, c_i^(2) are the quadratic effects and c_jk are the 2-factor interaction effects [5].
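
As an illustration of equation (1), such a quadratic metamodel can be fitted by least squares; the sketch below is a minimal Python rendition, not the implementation used in this work (which relied on the MATLAB toolboxes [8, 9]), and the factor data, coefficients and noise level are synthetic placeholders.

```python
# Minimal sketch: fit the quadratic metamodel of equation (1) by least
# squares on synthetic data (placeholder factors and coefficients).
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_factors = 100, 3
X = rng.uniform(-1.0, 1.0, size=(n_runs, n_factors))  # scaled factor levels

def design_matrix(X):
    """Columns of equation (1): constant c^(0), main effects x_i,
    quadratic effects x_i^2 and all 2-factor interactions x_j*x_k."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]        # c_i^(1) terms
    cols += [X[:, i] ** 2 for i in range(k)]   # c_i^(2) terms
    cols += [X[:, j] * X[:, l]                 # c_jk interaction terms
             for j in range(k) for l in range(j + 1, k)]
    return np.column_stack(cols)

# Synthetic response with additive random error (the epsilon of eq. (1))
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]
y += rng.normal(scale=0.1, size=n_runs)

A = design_matrix(X)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares estimates
print(coeffs.round(2))
```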
At the DoE stage, the blocking factors are identified as described in Section II. Experiments are repeated for each level of the blocking factors. The experiments are completed with replications in several points, i.e. repetitions of the experiment with the same factor settings. After the measurements are made, a screening process takes place. Its purpose is to determine which design factors have a real impact on the response. Analysis of variance is an efficient procedure for screening. If there are blocking factors, the analysis of variance is modified in order to take into account the variation they cause. The formulas for determining which factors are relevant are described in detail in [1]. An example of such a procedure is given in the case study described in Section IV.
After screening, the metamodel is built only with the design factors that influence the response. There are many ways to determine whether the parameters of the metamodel are estimated well or whether the metamodel fits the data. The analysis of the residuals has proved to be very successful. In this paper, however, we want to demonstrate that replication can also be used to determine the metamodel's adequacy to the data. For this we can use the so-called lack-of-fit test, which assumes that replications of experiments are made. The underlying principle uses the variance of the measured data with regard to the metamodel. The regression for estimating the metamodel's parameters uses the least squares criterion, which means that the curve (or the surface) does not necessarily pass through the measured points. The variation σ_m^2 of the data from the curve (or surface) can be estimated and compared with the variation of the measurements. If the first is smaller than or comparable to the second, the metamodel is adequate.


The variation of the measurements, σ_r^2, can be estimated by means of replications. Equation (2) shows how these two variances can be estimated [9]:
\sigma_m^2 = \frac{1}{n_u - p} \sum_{i=1}^{n_u} n_i \left[ \bar{y}_i - f(x_i, \theta) \right]^2, \qquad \sigma_r^2 = \frac{1}{n - n_u} \sum_{i=1}^{n_u} \sum_{j=1}^{n_i} \left[ y_{ij} - \bar{y}_i \right]^2        (2)

where n is the size of the data used to fit the model, n_u is the number of unique combinations of predictor variable levels, p is the number of coefficients of the metamodel, n_i is the number of replications done for each configuration of factors, f(x_i, θ) is the metamodel prediction at the ith unique factor configuration with estimated coefficients θ, y_ij is the measured response of the jth replication at the ith factor configuration and ȳ_i is the mean of the response at the ith factor configuration. The two estimators in equation (2) are called σ_m^2, the mean square for lack of fit, and σ_r^2, the mean square for the pure error.


Their ratio has statistical importance and can tell about the adequacy of the metamodel [9]:

L = \frac{\sigma_m^2}{\sigma_r^2}        (3)

L follows an F distribution indexed by the degrees of freedom of the two estimators [9]. The test consists in comparing L with the cut-off value F_0 for a given probability 1-α of accepting the hypothesis. If L is greater than F_0, the metamodel is inadequate and terms are probably missing from it. Otherwise, the metamodel fits properly. Indeed, in the latter case the variance of the data around the metamodel, σ_m^2, is comparable to or smaller than the variance of the measurements, σ_r^2.
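
A minimal Python sketch of this test, assuming the replicated responses are already grouped by unique factor configuration and that a metamodel prediction is available per configuration (the function name and data layout are our assumptions, not the paper's code):

```python
# Sketch of the lack-of-fit test of equations (2) and (3).
# groups: list of 1-D arrays with the replicated responses per unique
# configuration; predictions: metamodel value f(x_i, theta) per
# configuration; p: number of metamodel coefficients.
import numpy as np
from scipy.stats import f as f_dist

def lack_of_fit_test(groups, predictions, p, alpha=0.05):
    n_u = len(groups)                        # unique configurations
    n = sum(len(g) for g in groups)          # total number of measurements
    n_i = np.array([len(g) for g in groups])
    means = np.array([g.mean() for g in groups])
    preds = np.asarray(predictions)

    # Equation (2): mean square for lack of fit and for pure error
    sigma_m2 = np.sum(n_i * (means - preds) ** 2) / (n_u - p)
    sigma_r2 = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - n_u)

    L = sigma_m2 / sigma_r2                          # equation (3)
    F0 = f_dist.ppf(1 - alpha, n_u - p, n - n_u)     # upper-tail cut-off
    return L, F0, L <= F0                            # adequate when L <= F0
```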
After the metamodel is verified, it can be used to predict the response at points where no measurements were made, or to find the worst case of the response. However, if the measurements are affected by noise, we cannot be sure that the conclusions we draw are accurate. In this case, the replications come in handy again, because by their means we can quantify the effect of the noise. Hence, we can calculate the confidence interval for the mean of the response or, in the complementary case, given a confidence interval, we can calculate the number of replications needed in order to have a response mean within that interval.
Since the replications are taken at the same point, i.e. identical settings of the factor levels, the variability that appears from one run to another gives an indication of the experimental error, e.g. due to uncontrolled factors, unreliability of the measurement instruments, etc. The sample mean x̄ and the sample variance s^2 are calculated by equation (4), where x_i represents the observed values of the sample and n the number of replicates.
\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2        (4)

x̄ follows a Student's t-distribution with n-1 degrees of freedom. Hence, the standard deviation of x̄ can be calculated as follows [1]:

s_{\bar{x}} = \frac{s}{\sqrt{n}}        (5)

As expected, the greater the number of replications, the smaller the confidence interval for the estimated x̄ will be. The minimum number of replicates necessary for a given precision can be established by imposing a threshold value on the standard deviation of the mean response.
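
For illustration, equations (4) and (5) together with a t-based confidence interval can be computed as below; the replicate values are made up, not measured data.

```python
# Sketch: sample mean, sample variance (eq. (4)), standard deviation of
# the mean (eq. (5)) and a 95% t-based confidence interval.
import numpy as np
from scipy.stats import t as t_dist

x = np.array([101.2, 100.8, 101.5, 100.9, 101.1])  # hypothetical replicates
n = len(x)
x_bar = x.mean()                  # equation (4), sample mean
s = x.std(ddof=1)                 # equation (4), sample std (n-1 divisor)
s_mean = s / np.sqrt(n)           # equation (5)

t_crit = t_dist.ppf(0.975, df=n - 1)   # t distribution with n-1 dof
ci = (x_bar - t_crit * s_mean, x_bar + t_crit * s_mean)
print(f"mean = {x_bar:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```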

IV. CASE STUDY AND RESULTS


The approaches previously described can be applied on any kind of system which provides information on the response and the factors affecting it. Our case study considers a lighting control system used in automotive applications. This system contains a smart high-side switch to turn on any type of incandescent lamp (e.g. a direction indicator or a headlight). In [6] a new concept of an application-driven characterization method for automotive power micro-electronic devices is presented. Different automotive power application components like incandescent lamps, lithium-ion batteries or electric motors were emulated in real time by running their physical equations on an FPGA and controlling the load currents with a linear amplification concept. This innovative concept allows changing all load parameters, which are not adjustable when using physical components, as well as intrinsic parameters like the wire resistance or the inductance of the wiring harness connecting the device under test and the power load.

Fig. 1: Application Emulation System

The equations characterizing the system were formulated in LabView as block-level models and later automatically converted into synthesizable and executable digital designs ready to be processed on an FPGA [6, 7]. The model was evaluated based on the output voltage of the device under test, while the discrete-time values of the load current were afterwards converted into analog voltage levels controlling the power amplifier. This processing structure forces a current through the resistor which matches the electrical behavior of the real power load, in this case the incandescent lamp. This current was our response of interest. Replacing physical automotive power loads with the application emulation system allows tuning load parameters and finding application cases which are hard to find when using real automotive loads.

The influencing factors of the lamp model were discovered using sensitivity analysis methods, such as evaluating the partial derivative for every factor and observing the impact of each factor on the response of interest [6]. In this case the response is affected by eight factors coming from the application. An overview of these factors for 21 W incandescent lamps and the response of interest is presented in Table II.
TABLE II. FACTORS OF INFLUENCE FOR THE INCANDESCENT LAMP MODEL AND THE RESPONSE OF INTEREST

Characteristic                                     Role
Conductor resistance Rwire                         Factor
Conductor inductance Lwire                         Factor
Heat capacity of filament Cth,FIL                  Factor
Thermal resistance of filament Rth,FIL             Factor
Filament temperature at nominal power TFIL,nom     Factor
Filament resistance at nominal power RFil,nom      Factor
Ambient temperature TAmb                           Factor
Battery voltage VBAT                               Blocking factor
Peak current                                       Response

Using the factors presented in Table II, a design of 900 runs was created, ensuring a good coverage of the verification space. Five factors were defined on three levels, i.e. minimum, nominal and maximum, while the ambient temperature and the heat capacity of the filament had two additional intermediate levels. From the initial design, key configurations, i.e. center and corners, were selected and each was replicated 10 times, adding another 180 runs to the design. Measurements were done in the lab according to the planned design.
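
As an illustration of how such a plan can be assembled, the sketch below builds a plain factorial grid and replicates center/corner configurations. The factor names follow Table II, but the level values are placeholders and the sketch does not reproduce the actual 900-run design.

```python
# Sketch: factor-level grid plus 10 replications of key configurations.
from itertools import product

levels = {                               # placeholder level values
    "Rwire":   [0.01, 0.05, 0.10],       # three levels: min, nominal, max
    "Lwire":   [1e-6, 5e-6, 1e-5],
    "RthFIL":  [0.8, 1.0, 1.2],
    "RFilnom": [2.0, 2.5, 3.0],
    "TFILnom": [2400, 2600, 2800],
    "TAmb":    [-40, -10, 25, 60, 105],  # five levels (two intermediate)
    "CthFIL":  [0.8, 0.9, 1.0, 1.1, 1.2],
}

grid = [dict(zip(levels, combo)) for combo in product(*levels.values())]

# Key configurations: the center point and the two extreme corners
center = {k: v[len(v) // 2] for k, v in levels.items()}
corners = [{k: v[0] for k, v in levels.items()},
           {k: v[-1] for k, v in levels.items()}]
replicated = [(cfg, rep) for cfg in [center] + corners for rep in range(10)]

print(len(grid), "grid runs,", len(replicated), "replicated runs")
```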
To prove how the blocking principle works for screening purposes, we used analysis of variance (ANOVA). First we applied this analysis on a set containing only design factors and afterwards we added a nuisance factor to the analysis. We considered the following scenario: our system should work on any battery we have available and it is not of interest to see how the response is affected by the supply. For this reason we considered the blocking factor to be the battery voltage VBAT: it is not of interest in the present experiment but can affect it.
The variance analysis consists in building ANOVA tables; they display how the variability of the data is divided into the variability due to differences between the factors' levels and the remaining variability that cannot be explained by any systematic source. The tables are read from left to right: the source of the variability, i.e. the factors of the system; the sum of squares (SS) due to each source; the degrees of freedom (df) of each factor, which is the number of levels of the factor minus 1; the mean square, which is in fact SS/df; the ratio of the mean squares, which corresponds to the F statistic; and the p-value for the F statistic. Factors whose p-value is bigger than 0.05 are considered to have no significant effect upon the response of interest and can be eliminated from further analysis. A more permissive threshold of 0.01 can also be used. In our analyses, three of the factors were eliminated initially because they were not varied during the measurements, which means that their degree of freedom was 0. Table III is an ANOVA table with the results for the first set. The factors are listed in the first column; for each, the sum of squares, degrees of freedom, mean square and p-value are calculated. It can be noticed that, in our case, the first factor can easily be eliminated due to the fact that its p-value exceeds our threshold of 0.05. The second factor might be a candidate for elimination. Note that the error mean square has a considerable value, i.e. 28.12.

TABLE III. ANALYSIS OF VARIANCE APPLIED ON THE SET OF FACTORS THAT DID NOT CONTAIN THE BLOCKING FACTOR

The second analysis of variance was applied on the set of factors containing the nuisance factor VBAT. Table IV presents the results of this analysis. The columns in the table have the same meaning as those previously described. These results also confirm that the first factor can be eliminated from further analysis, but they prove that the decision to eliminate the second factor would have been wrong: its p-value decreases to 0, showing that this factor in fact has an effect on the response that should be taken into account. The big difference that can easily be noticed between the two analyses is that the mean square error has decreased from 28.12 to 0.6, which is an acceptable value for this data. These last two remarks prove the importance of taking the nuisance factor into consideration.
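
The two ANOVA passes can be sketched, for instance, with the statsmodels package as below; the data file, the factor column names x1..x5 and the response name are hypothetical stand-ins for the lab data.

```python
# Sketch: ANOVA without and with the blocking factor VBAT.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("measurements.csv")   # hypothetical lab-data file

# First pass, design factors only: battery variation inflates the error MS
model_a = ols("current ~ C(x1) + C(x2) + C(x3) + C(x4) + C(x5)", df).fit()
print(sm.stats.anova_lm(model_a, typ=2))

# Second pass, VBAT added as blocking factor: its systematic variation
# leaves the error term, so the error MS drops and the p-values sharpen
model_b = ols("current ~ C(x1) + C(x2) + C(x3) + C(x4) + C(x5) + C(vbat)",
              df).fit()
print(sm.stats.anova_lm(model_b, typ=2))
```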
TABLE IV. ANALYSIS OF VARIANCE APPLIED ON A SET OF SIX FACTORS. TO THE INITIAL SET THE BLOCKING FACTOR VBAT WAS ADDED (FACTOR X6 IN THE LIST)

After the screening process is complete, the data can be used to build a metamodel of the system. In our case, the metamodel built was quadratic, as described by equation (1). The number of factors was reduced after screening to four. The metamodel built had 15 coefficients, i.e. four for the quadratic effects, four for the main effects, six for the 2-factor interactions and one constant term; this count is checked in the sketch after the list below. Although the number of factors was reduced, we have no certainty that the metamodel will fit the data. The replications that were added to the initial design can be used for three purposes:
1. determine the metamodel's adequacy to the data;
2. define the confidence interval for the mean response;
3. determine the minimum number of replications that are needed in order to measure the system's response with a given accuracy.
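
The coefficient count can be verified directly from the structure of equation (1); the following is illustrative arithmetic only:

```python
# Terms of the quadratic metamodel of equation (1) for k factors:
# 1 constant + k main effects + k quadratic effects + k(k-1)/2 interactions
def n_coefficients(k):
    return 1 + k + k + k * (k - 1) // 2

print(n_coefficients(4))   # -> 15, the metamodel size after screening
```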
In order to fulfill the first task, we used the lack-of-fit test described in Section III. The variation of the data from the metamodel, σ_m^2, and the variation of the measurements, σ_r^2, were computed using equation (2), considering that: the size of the data used to fit the model was n = 784, the number of unique combinations of predictor variable levels was n_u = 159, the number of coefficients of the metamodel was p = 15 and the number of replicates for each configuration was n_i ≈ 25. The reason for which the number of replicates increased from 10 to 25 is that, although we eliminated some factors from the analysis during screening, we did not eliminate their measurements from the data. Having the values of the two estimators (the mean square for lack of fit and the mean square for the pure error), the value of L was easily computed. The value for our data, i.e. 1.0575, was calculated considering that the numerator's and denominator's degrees of freedom are 144 and 625. According to the tables from [11], this value is smaller than the corresponding upper-tail cut-off value F_0 of the F distribution for a probability of 0.05, which means that the metamodel is adequate. Had the value of L been greater than F_0, the metamodel would not have fit and terms would probably be missing from it.
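
The cut-off comparison can also be done numerically instead of using printed tables; a sketch with scipy, using the reported values:

```python
# Sketch: check the reported lack-of-fit ratio against the F cut-off.
from scipy.stats import f as f_dist

L = 1.0575                              # reported ratio, eq. (3)
F0 = f_dist.ppf(0.95, 144, 625)         # upper-tail cut-off, alpha = 0.05
print(f"F0 = {F0:.3f}, adequate: {L < F0}")   # F0 is about 1.2, so L < F0
```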
To demonstrate the second and third purposes of using replication, we applied equations (4) and (5) on the replicated data. The results obtained can be used to generate the plot in Fig. 2, i.e. the standard deviation of the mean response versus the number of replications. Each colored line represents one key configuration that was replicated. The confidence interval for the mean response is defined using the values of the standard deviations of the mean responses.
In order to define the minimum number of replicates needed to measure the response with a given accuracy, it is enough to observe the behavior of the standard deviation of the mean response. For example, if we consider the highest line and we want a standard deviation of the mean response of less than 0.5 mA, we should replicate the configurations at least 4 times. It can be noticed that at some point all values remain roughly the same: even if the number of replicates is increased, no significant change occurs in the value of the standard deviation of the response. So we can say that, for this lighting control system used in automotive applications, a number of 5 replicates is enough. More replicates would only mean bigger costs.
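
This read-off can be automated by inverting equation (5) for a target standard deviation of the mean; the sample standard deviation below is a made-up placeholder, not the measured one.

```python
# Sketch: smallest n with s/sqrt(n) below a target, from equation (5)
import math

def min_replicates(s, target):
    return math.ceil((s / target) ** 2)

print(min_replicates(s=0.9, target=0.5))   # e.g. s = 0.9 mA -> 4 replicates
```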

Fig. 2: Standard deviation of mean response vs. number of replications

V. CONCLUSIONS
As there are no methods that quantify the variations that occur in measurements, the paper intends to deliver methods that improve the quality of the data used in analysis. Experiments play an important role in observing the effect of each factor, but due to expenses only a limited number of experiments can be performed. We used the Design of Experiments technique to obtain the necessary data. Principles such as replication and blocking were implemented. Blocking helped to eliminate the systematic error and was used for screening purposes; using ANOVA, we proved the importance of taking nuisance factors into consideration. Replications helped us to determine that the built metamodel fits the data; the lack-of-fit test was used for this purpose. We established how the confidence interval for the mean response should be defined, and the minimum number of replications needed in order to measure the system's response with a given accuracy was also determined. The methods were applied on data coming from a lighting control system used in automotive applications that was emulated.
ACKNOWLEDGMENT
This research project is supported by the German
Government, Federal Ministry of Education and Research
under the grant number 01M3195 in the project RESCAR 2.0.
REFERENCES
[1] D. Montgomery, Design and Analysis of Experiments, John
Wiley & Sons, 2001.
[2] R. A. Fisher, "The Arrangement of Field Experiments," in Collected Papers Relating to Statistical and Mathematical Theory and Applications, Springer New York, 1992.
[3] R. E. Kirk, "Experimental Design," in R. Millsap & A. Maydeu-Olivares (Eds.), Sage Handbook of Quantitative Methods in Psychology, pp. 23-45, 2009.
[4] T. Donnelly, "Design of Experiments for Real-World Problems," NDIA 26th Annual National Test & Evaluation Conference, March 2, 2010.
[5] M. Rafaila, Planning Experiments for the Validation of
Electronic Control Units, Doctoral Thesis, Faculty of Electrical
Engineering and Information Technology, Vienna, 2010.

[6] M. Harrant, T. Nirmaier, G. Pelz, F. Dona, C. Grimm, "Configurable load emulation using FPGA and power amplifiers for automotive power ICs," Forum on Specification and Design Languages (FDL), 2012.
[7] M. Harrant, T. Nirmaier, J. Kirscher, C. Grimm, G. Pelz, "Monte Carlo based post-silicon verification considering automotive application variances," 9th Conference on Ph.D. Research in Microelectronics and Electronics (PRIME), pp. 165-168, 24-27 June 2013.

[8] MATLAB, Statistics Toolbox, http://www.mathworks.com/help/toolbox/stats/
[9] MATLAB, Curve Fitting Toolbox, http://www.mathworks.de/products/curvefitting/
[10] Handbook of National Institute of Standards and Technology
(NIST), http://www.itl.nist.gov
[11] Statistics Online Computational Resource (SOCR), http://www.socr.ucla.edu/
