
LEARNING GOAL

- Explain the simple linear regression model
- Obtain and interpret the simple linear regression equation for a set of data
- Describe R² as a measure of the explanatory power of the regression model
- Use a regression equation for prediction

Copyright 2009 Pearson Education, Inc.

Definition

A correlation exists between two variables when higher values of one variable consistently go with higher values of another variable, or when higher values of one variable consistently go with lower values of another variable.


There is a correlation between the variables amount of smoking and likelihood of lung cancer; that is, heavier smokers are more likely to get lung cancer.

There is a correlation between the variables height and weight for people; that is, taller people tend to weigh more than shorter people.

There is a correlation between the variables demand for apples and price of apples; that is, demand tends to decrease as price increases.

There is a correlation between practice time and skill among piano players; that is, those who practice more tend to be more skilled.

Scatter Diagrams

Definition

A scatter diagram (or scatterplot) is a graph in which each point represents the values of two variables.

Types of Correlation

(Note: detailed descriptions of these graphs appear in the next few slides.)


Diagrams a to c show positive correlation, in which the values of y tend to increase with increasing values of x. The correlation becomes stronger as we proceed from a to c. In fact, c shows a perfect positive correlation, in which all the points fall along a straight line.

Diagrams d to f show negative correlation, in which the values of y tend to decrease with increasing values of x. The correlation becomes stronger as we proceed from d to f. In fact, f shows a perfect negative correlation, in which all the points fall along a straight line.

Another diagram shows no correlation between x and y. In other words, values of x do not appear to be linked to values of y in any way.

A final set of diagrams shows nonlinear relationships, in which x and y appear to be related but the relationship does not correspond to a straight line. (Linear means along a straight line, and nonlinear means not along a straight line.)

Types of Correlation

Positive correlation: Both variables tend to increase (or decrease) together.

Negative correlation: The two variables tend to change in opposite directions, with one increasing while the other decreases.

No correlation: There is no apparent (linear) relationship between the two variables.

Nonlinear relationship: The two variables are related, but the relationship results in a scatter diagram that does not follow a straight-line pattern.

Correlation

Statisticians measure the strength of a correlation with a number called the correlation coefficient, represented by the letter r.

The correlation coefficient, r, is a measure of the strength of a correlation. Its value can range only from -1 to 1.

If there is no correlation, the points do not follow any ascending or descending straight-line pattern, and the value of r is close to 0.

If there is a positive correlation, the correlation coefficient is positive (0 < r ≤ 1): Both variables increase together. A perfect positive correlation (in which all the points on a scatter diagram lie on an ascending straight line) has a correlation coefficient r = 1. Values of r close to 1 mean a strong positive correlation, and positive values closer to 0 mean a weak positive correlation.

(cont.)

If there is a negative correlation, the correlation coefficient is negative (-1 ≤ r < 0): When one variable increases, the other decreases. A perfect negative correlation (in which all the points lie on a descending straight line) has a correlation coefficient r = -1. Values of r close to -1 mean a strong negative correlation, and negative values closer to 0 mean a weak negative correlation.

The formula for the (linear) correlation coefficient r can be expressed in several different ways that are all algebraically equivalent, which means that they produce the same value. One standard form relates directly to the underlying rationale for r:

r = Σ(x - x̄)(y - ȳ) / √[ Σ(x - x̄)² × Σ(y - ȳ)² ]
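As a quick check of the definition, r can be computed directly in a few lines of Python. This is a minimal sketch; the (x, y) pairs here are hypothetical illustration data, not from the text.

```python
import math

# Hypothetical data: e.g., practice hours (x) vs. skill scores (y)
x = [2, 4, 5, 7, 9]
y = [10, 14, 13, 18, 21]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# r = sum of deviation products over the square root of the
# product of the summed squared deviations
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
sxx = sum((xi - x_bar) ** 2 for xi in x)
syy = sum((yi - y_bar) ** 2 for yi in y)
r = sxy / math.sqrt(sxx * syy)
print(r)  # about 0.976: a strong positive correlation
```

Because the deviation products are positive when x and y move together, r comes out close to +1 for this upward-trending data.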

Beware of Outliers

If you calculate the correlation coefficient for these data (Figure 7.10), you'll find that it is a relatively high r = 0.880, suggesting a very strong correlation.

However, if you cover the data point in the upper right corner of Figure 7.10, the apparent correlation disappears. In fact, without this data point, the correlation coefficient is r = 0.
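The outlier effect described above is easy to reproduce with a small hypothetical data set (these numbers are illustrative, not the data of Figure 7.10): five points with zero correlation, plus one extreme point that single-handedly creates a strong apparent correlation.

```python
import math

def corr(x, y):
    # Pearson correlation coefficient computed from the definition
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1, 2, 3, 4, 5]
y = [3, 1, 4, 1, 3]
r_without = corr(x, y)              # essentially 0: no correlation
r_with = corr(x + [20], y + [20])   # one extreme outlier added
print(r_without, r_with)
```

A single point far from the rest of the data can dominate r, which is why scatter diagrams should always be inspected before trusting the coefficient.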

You've conducted a study to determine how the number of calories a person consumes in a day correlates with time spent in vigorous bicycling. Your sample consisted of ten women cyclists, all of approximately the same height and weight. Over a period of two weeks, you asked each woman to record the amount of time she spent cycling each day and what she ate on each of those days. You used the eating records to calculate the calories consumed each day.

Figure 7.11 shows a scatter diagram with each woman's mean time spent cycling on the horizontal axis and mean caloric intake on the vertical axis. Do higher cycling times correspond to higher intake of calories?

Solution: A glance at the scatter diagram will probably tell you that there is a positive correlation in which greater cycling time tends to go with higher caloric intake. But the correlation is very weak, with a correlation coefficient of r = 0.374.

However, notice that two points are outliers: one representing a cyclist who cycled about a half-hour per day and consumed more than 3,000 calories, and the other representing a cyclist who cycled more than 2 hours per day on only 1,200 calories. It's difficult to explain the two outliers, given that all the women in the sample have similar heights and weights.

Solution: (cont.)

We might therefore suspect that these two women either recorded their data incorrectly or were not following their usual habits during the two-week study. If we can confirm this suspicion, then we would have reason to delete the two data points as invalid.

Figure 7.12 shows that the correlation is quite strong without those two outlier points, and suggests that the number of calories consumed rises by a little more than 500 calories for each hour of cycling. Of course, we should not remove the outliers without confirming our suspicion that they were invalid data points, and we should report our reasons for leaving them out.

Figure 7.12 The data from Figure 7.11 without the two outliers.

Correlations can also be misinterpreted when data are grouped inappropriately. In some cases, grouping data hides correlations.

Consider a (hypothetical) study in which researchers seek a correlation between hours of TV watched per week and high school grade point average (GPA). They collect the 21 data pairs in Table 7.3.

The scatter diagram (Figure 7.13) shows virtually no correlation; the correlation coefficient for the data is about r = -0.063. The apparent conclusion is that TV viewing habits are unrelated to academic achievement.

Figure 7.13

One researcher, however, notices that some students watched mostly educational programs, while others tended to watch comedies, dramas, and movies. She therefore divides the data set into two groups, one for the students who watched mostly educational television and one for the other students. Table 7.4 shows her results with the students divided into these two groups.

Separating the two groups reveals a strong positive correlation for the students who watched educational programs (r = 0.855) and a strong negative correlation for the other students (r = -0.951).

Figure 7.14 These scatter diagrams show the same data as Figure 7.13, separated into the two groups identified in Table 7.4.
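The way grouping can hide correlations can be sketched with hypothetical numbers (not the values of Table 7.4): two groups whose within-group correlations are perfect but opposite cancel out when the data are pooled.

```python
import math

def corr(x, y):
    # Pearson correlation coefficient from the definition
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical TV hours and GPAs (illustrative only)
edu_hours, edu_gpa = [1, 2, 3, 4], [2.0, 2.5, 3.0, 3.5]  # educational-TV group
oth_hours, oth_gpa = [1, 2, 3, 4], [3.5, 3.0, 2.5, 2.0]  # other-programming group

r_combined = corr(edu_hours + oth_hours, edu_gpa + oth_gpa)  # pooled: no correlation
r_edu = corr(edu_hours, edu_gpa)   # within group: perfect positive
r_oth = corr(oth_hours, oth_gpa)   # within group: perfect negative
print(r_combined, r_edu, r_oth)
```

The pooled coefficient is 0 even though each subgroup shows a perfect linear relationship, mirroring the TV/GPA example above.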

In other cases, grouping data can suggest a correlation that does not actually exist among subgroups. Figure 7.15 shows the scatter diagram of the (hypothetical) data collected by a consumer group studying the relationship between the weights and prices of cars. The data fall into two distinct clusters, and although the two clusters together suggest a correlation, there is no correlation within either cluster.

Figure 7.15 Scatter diagram for the car weight and price data.

Perhaps the most important caution about interpreting correlations is one we've already mentioned: Correlation does not necessarily imply causality.

1. The correlation may be a coincidence.

2. Both correlated variables might be directly influenced by some common underlying cause.

3. One of the correlated variables may actually be a cause of the other. But note that, even in this case, it may be just one of several causes.

Definition

The best-fit line (or regression line) on a scatter diagram is a line that lies closer to the data points than any other possible line (according to a standard statistical measure of closeness).

Cautions in Making Predictions from Best-Fit Lines

1. Don't expect a best-fit line to give a good prediction unless the correlation is strong and there are many data points. If the sample points lie very close to the best-fit line, the correlation is very strong and the prediction is more likely to be accurate. If the sample points lie away from the best-fit line by substantial amounts, the correlation is weak and predictions tend to be much less accurate.

2. Don't use a best-fit line to make predictions beyond the bounds of the data points to which the line was fit.

(cont.)

3. A best-fit line based on past data is not necessarily valid now and might not result in valid predictions of the future.

4. Don't make predictions about a population that is different from the population from which the sample data were drawn.

5. Remember that a best-fit line is meaningless when there is no significant correlation or when the relationship is nonlinear.

State whether the prediction (or implied prediction) should be trusted in each of the following cases, and explain why or why not.

a. You've found a best-fit line for a correlation between the number of hours per day that people exercise and the number of calories they consume each day. You've used this correlation to predict that a person who exercises 18 hours per day would consume 15,000 calories per day.

Solution:

a. No one exercises 18 hours per day on an ongoing basis, so this much exercise must be beyond the bounds of any data collected. Therefore, a prediction about someone who exercises 18 hours per day should not be trusted.

State whether the prediction (or implied prediction) should be trusted in each of the following cases, and explain why or why not.

b. There is a well-known but weak correlation between SAT scores and college grades. You use this correlation to predict the college grades of your best friend from her SAT scores.

Solution:

b. The fact that the correlation between SAT scores and college grades is weak means there is much scatter in the data. As a result, we should not expect great accuracy if we use this weak correlation to make a prediction about a single individual.

The simple linear regression model describes a linear relationship between two variables:

Y = β0 + β1X

where Y is the dependent variable and X is the independent variable; β0 is the Y intercept and β1 is the slope.

The simple linear regression equation, estimated from the sample data using a least squares regression technique, is

ŷ = b0 + b1x

where

b1 = Cov(x, y) / s²x    and    b0 = ȳ - b1x̄

Copyright 2010 Pearson Education, Inc. Publishing as Prentice Hall

Introduction to Regression Analysis

Regression analysis is used to predict the value of a dependent variable based on the value of at least one independent variable, and to explain the impact of changes in an independent variable on the dependent variable.

Dependent variable: the variable we wish to explain (also called the endogenous/response variable).

Independent variable: the variable used to explain the dependent variable (also called the exogenous/explanatory variable).

In the linear regression model, the relationship between X and Y is described by a linear function, with changes in Y assumed to be related to changes in X:

Yi = β0 + β1xi + εi

where β0 and β1 are the model coefficients and ε is a random error term.

Model

The population regression model:

Yi = β0 + β1Xi + εi

where Yi is the dependent variable, β0 is the population Y intercept, β1 is the population slope coefficient, Xi is the independent variable, and εi is the random error term. β0 + β1Xi is the linear component and εi is the random error component.

Model (continued)

Yi = β0 + β1Xi + εi

[Figure: for a given Xi, the observed value of Y equals the predicted value on the population line (intercept β0, slope β1) plus the random error εi for that Xi value.]

Equation

The simple linear regression equation provides an estimate of the population regression line:

ŷi = b0 + b1xi

where ŷi is the estimated (or predicted) y value for observation i, b0 is the estimate of the regression intercept, b1 is the estimate of the regression slope, and xi is the value of x for observation i. The residual for observation i is

ei = (yi - ŷi) = yi - (b0 + b1xi)

Least squares estimation finds the values of b0 and b1 that minimize the sum of the squared differences between y and ŷ:

min SSE = min Σe²i = min Σ(yi - ŷi)² = min Σ[yi - (b0 + b1xi)]²

Differential calculus is used to obtain the coefficient estimators b0 and b1 that minimize SSE.

(continued)

b1 = Σ(i=1..n) (xi - x̄)(yi - ȳ) / Σ(i=1..n) (xi - x̄)²  =  Cov(x, y) / s²x  =  rxy (sy / sx)

b0 = ȳ - b1x̄

Equation

In practice, the regression coefficients, like the other regression results in this chapter, will be found using a computer.

Assumptions

- The true relationship form is linear (Y is a linear function of X, plus random error)
- The error terms, εi, are independent of the x values
- The error terms are random variables with mean 0 and constant variance, σ² (the constant variance property is called homoscedasticity):

  E[εi] = 0 and E[ε²i] = σ² for i = 1, ..., n

- The error terms are not correlated with one another, so that

  E[εiεj] = 0 for all i ≠ j

Interpretation of the Slope and the Intercept

- b0 is the estimated average value of y when the value of x is zero (if x = 0 is in the range of observed x values)
- b1 is the estimated change in the average value of y as a result of a one-unit change in x

Example

Suppose we wish to examine the relationship between the selling price of a home and its size (measured in square feet).

Dependent variable (Y) = house price in $1000s
Independent variable (X) = square feet

House Price Model

House Price in $1000s (Y) | Square Feet (X)
245  | 1400
312  | 1600
279  | 1700
308  | 1875
199  | 1100
219  | 1550
405  | 2350
324  | 2450
319  | 1425
255  | 1700
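The least squares coefficients for this data set can be reproduced with a short Python sketch (the script and its variable names are mine, not part of the slides):

```python
# Least squares fit of the house price model (price in $1000s vs. square feet)
prices = [245, 312, 279, 308, 199, 219, 405, 324, 319, 255]
sqft   = [1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700]

n = len(sqft)
x_bar = sum(sqft) / n
y_bar = sum(prices) / n

# b1 = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2),  b0 = y_bar - b1 * x_bar
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(sqft, prices))
sxx = sum((x - x_bar) ** 2 for x in sqft)
b1 = sxy / sxx
b0 = y_bar - b1 * x_bar
print(round(b0, 5), round(b1, 5))  # matches the Excel output: 98.24833 0.10977
```

These are exactly the Intercept and Square Feet coefficients reported in the Excel output later in the chapter.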

Graphical Presentation

[Figure: scatter plot of house price vs. square feet, with the fitted regression line.]

The Excel regression output below reports the estimated coefficients and several measures of goodness of fit for the regression.

Excel Output

Regression Statistics
Multiple R          0.76211
R Square            0.58082
Adjusted R Square   0.52842
Standard Error      41.33032
Observations        10

ANOVA        df    SS           MS           F         Significance F
Regression    1    18934.9348   18934.9348   11.0848   0.01039
Residual      8    13665.5652   1708.1957
Total         9    32600.5000

             Coefficients   Standard Error   t Stat    P-value   Lower 95%   Upper 95%
Intercept    98.24833       58.03348         1.69296   0.12892   -35.57720   232.07386
Square Feet  0.10977        0.03297          3.32938   0.01039   0.03374     0.18580

Graphical Presentation

[Figure: house price model scatter plot and regression line, with slope = 0.10977 and intercept = 98.248.]

Interpretation of the Intercept, b0

house price = 98.24833 + 0.10977 (square feet)

b0 is the estimated average value of Y when the value of X is zero (if X = 0 is in the range of observed X values). Because no house can have zero square feet, b0 has no direct interpretation here; it just indicates that, for houses within the range of sizes observed, $98,248.33 is the portion of the house price not explained by square feet.

Interpretation of the Slope Coefficient, b1

house price = 98.24833 + 0.10977 (square feet)

b1 measures the estimated change in the average value of Y as a result of a one-unit change in X. Here, b1 = 0.10977 tells us that the average value of a house increases by 0.10977($1000) = $109.77, on average, for each additional one square foot of size.

Measures of Variation

SST = SSR + SSE

Total Sum of Squares:       SST = Σ(yi - ȳ)²
Regression Sum of Squares:  SSR = Σ(ŷi - ȳ)²
Error Sum of Squares:       SSE = Σ(yi - ŷi)²

where ȳ = average value of y, and ŷi = predicted value of y for the given xi value.

Measures of Variation (continued)

SST measures the total variation of the yi values around their mean, ȳ. SSR is the explained variation attributable to the linear relationship between x and y. SSE is the variation attributable to factors other than the linear relationship between x and y.

Measures of Variation (continued)

[Figure: for a single point (xi, yi), SSE sums the squared distances (yi - ŷi)² from the observed values to the line, SSR sums the squared distances (ŷi - ȳ)² from the line to the mean, and SST sums the squared distances (yi - ȳ)² from the observed values to the mean.]

Coefficient of Determination, R²

The coefficient of determination is the portion of the total variation in the dependent variable that is explained by variation in the independent variable. The coefficient of determination is also called R-squared and is denoted as R²:

R² = SSR / SST = regression sum of squares / total sum of squares

note: 0 ≤ R² ≤ 1
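The decomposition SST = SSR + SSE and the resulting R² can be verified numerically for the house price data (a sketch in Python; not part of the slides):

```python
# R^2 = SSR / SST for the house price model
prices = [245, 312, 279, 308, 199, 219, 405, 324, 319, 255]
sqft   = [1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700]

n = len(sqft)
x_bar, y_bar = sum(sqft) / n, sum(prices) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(sqft, prices))
sxx = sum((x - x_bar) ** 2 for x in sqft)
b1 = sxy / sxx
b0 = y_bar - b1 * x_bar

y_hat = [b0 + b1 * x for x in sqft]
sst = sum((y - y_bar) ** 2 for y in prices)               # total sum of squares
ssr = sum((yh - y_bar) ** 2 for yh in y_hat)              # regression sum of squares
sse = sum((y - yh) ** 2 for y, yh in zip(prices, y_hat))  # error sum of squares
r_squared = ssr / sst
print(round(sst, 1), round(r_squared, 5))  # 32600.5 0.58082
```

The totals agree with the ANOVA table in the Excel output (SSR = 18934.9348, SST = 32600.5, R² = 0.58082).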

Examples of Approximate r² Values

[Figure: r² = 1. Perfect linear relationship between X and Y: 100% of the variation in Y is explained by variation in X.]

Examples of Approximate r² Values

[Figure: 0 < r² < 1. Weaker linear relationships between X and Y: some but not all of the variation in Y is explained by variation in X.]

Examples of Approximate r² Values

[Figure: r² = 0. No linear relationship between X and Y: the value of Y does not depend on X. None of the variation in Y is explained by variation in X.]

Excel Output

From the Excel output:

R² = SSR / SST = 18934.9348 / 32600.5000 = 0.58082

58.08% of the variation in house prices is explained by variation in square feet.

Correlation and R²

The coefficient of determination in simple regression is equal to the simple correlation squared:

R² = r²xy

Estimation of Model Error Variance

An estimator for the variance of the model error is

s²e = SSE / (n - 2) = Σe²i / (n - 2)

Division is by n - 2 instead of n - 1 because the simple regression model uses two estimated parameters, b0 and b1, instead of one. se = √s²e is called the standard error of the estimate.
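The standard error of the estimate for the house price model can be checked directly (a Python sketch, not part of the slides):

```python
import math

# Standard error of the estimate for the house price model
prices = [245, 312, 279, 308, 199, 219, 405, 324, 319, 255]
sqft   = [1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700]

n = len(sqft)
x_bar, y_bar = sum(sqft) / n, sum(prices) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(sqft, prices)) / \
     sum((x - x_bar) ** 2 for x in sqft)
b0 = y_bar - b1 * x_bar

# SSE = sum of squared residuals; divide by n - 2 (two estimated parameters)
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(sqft, prices))
s_e = math.sqrt(sse / (n - 2))
print(round(s_e, 5))  # 41.33032, the "Standard Error" in the Excel output
```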

Excel Output

From the Excel output, the standard error of the estimate is

se = 41.33032

se is a measure of the variation of observed y values from the regression line.

[Figure: two scatter plots with the same fitted line, one with small se (points tightly clustered around the line) and one with large se.]

The magnitude of se should always be judged relative to the size of the y values in the sample data; i.e., se = $41.33K is moderately small relative to house prices in the $200-$300K range.

Inference About the Regression Model

The variance of the regression slope coefficient (b1) is estimated by

s²b1 = s²e / Σ(xi - x̄)² = s²e / [(n - 1)s²x]

where sb1 is the estimate of the standard error of the least squares slope, and

se = √(SSE / (n - 2))

is the standard error of the estimate.

Excel Output

From the Excel output, the estimated standard error of the slope is

sb1 = 0.03297

Comparing Standard Errors of the Slope

sb1 is a measure of the variation in the slope of regression lines fitted to different possible samples.

[Figure: scatter plots illustrating small sb1 vs. large sb1.]

t Test

H0: β1 = 0 (no linear relationship)
H1: β1 ≠ 0 (linear relationship does exist)

Test statistic:

t = (b1 - β1) / sb1,    d.f. = n - 2

where b1 = regression slope coefficient, β1 = hypothesized slope, and sb1 = standard error of the slope.

t Test (continued)

For the house price data (10 houses; price in $1000s vs. square feet), the estimated regression equation is

house price = 98.25 + 0.1098 (sq. ft.)

Does square footage of the house affect its sales price?

t Test Example

H0: β1 = 0
H1: β1 ≠ 0

From the Excel output, b1 = 0.10977 and sb1 = 0.03297, so

t = (b1 - β1) / sb1 = (0.10977 - 0) / 0.03297 = 3.32938
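The slope standard error and t statistic can be reproduced from the raw data (a Python sketch, not part of the slides):

```python
import math

# t statistic for H0: beta1 = 0 in the house price model
prices = [245, 312, 279, 308, 199, 219, 405, 324, 319, 255]
sqft   = [1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700]

n = len(sqft)
x_bar, y_bar = sum(sqft) / n, sum(prices) / n
sxx = sum((x - x_bar) ** 2 for x in sqft)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(sqft, prices)) / sxx
b0 = y_bar - b1 * x_bar

sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(sqft, prices))
s_e = math.sqrt(sse / (n - 2))
s_b1 = s_e / math.sqrt(sxx)   # standard error of the slope
t = (b1 - 0) / s_b1           # hypothesized slope is 0
print(round(s_b1, 5), round(t, 5))  # matches the Excel output: 0.03297, 3.32938
```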

t Test Example (continued)

H0: β1 = 0;  H1: β1 ≠ 0;  d.f. = 10 - 2 = 8

For a two-tailed test at α = .05 (α/2 = .025 in each tail), the critical value is t8,.025 = 2.3060: reject H0 if t < -2.3060 or t > 2.3060; otherwise do not reject H0.

Since t = 3.329 > 2.3060, the decision is: Reject H0.

Conclusion: There is sufficient evidence that square footage affects house price.

t Test Example (continued)

From the Excel output, the P-value for the slope is 0.01039. For 8 d.f., the p-value is

P(t > 3.329) + P(t < -3.329) = 0.01039

Since the p-value 0.01039 is less than α = .05: Reject H0.

Conclusion: There is sufficient evidence that square footage affects house price.

Confidence Interval Estimate for the Slope

Confidence Interval Estimate of the Slope:

b1 ± tn-2,α/2 sb1,    d.f. = n - 2

From the Excel output, the Lower 95% and Upper 95% limits for Square Feet are 0.03374 and 0.18580. At the 95% level of confidence, the confidence interval for the slope is (0.0337, 0.1858).
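The interval can be reproduced from the raw data with the tabled critical value t(8, .025) = 2.306 (a Python sketch, not part of the slides):

```python
import math

# 95% confidence interval for the slope of the house price model
prices = [245, 312, 279, 308, 199, 219, 405, 324, 319, 255]
sqft   = [1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700]

n = len(sqft)
x_bar, y_bar = sum(sqft) / n, sum(prices) / n
sxx = sum((x - x_bar) ** 2 for x in sqft)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(sqft, prices)) / sxx
b0 = y_bar - b1 * x_bar
s_e = math.sqrt(sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(sqft, prices)) / (n - 2))
s_b1 = s_e / math.sqrt(sxx)

t_crit = 2.306  # t(n-2, alpha/2) for n = 10, alpha = .05, from the t table
lower, upper = b1 - t_crit * s_b1, b1 + t_crit * s_b1
print(round(lower, 4), round(upper, 4))  # approximately (0.0337, 0.1858)
```

Note that the interval excludes 0, which is what makes the slope significant at the .05 level.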

Confidence Interval Estimate for the Slope (continued)

Since the units of the house price variable are $1000s, we are 95% confident that the average impact on sales price is between $33.70 and $185.80 per square foot of house size.

This 95% confidence interval does not include 0. Conclusion: There is a significant relationship between house price and square feet at the .05 level of significance.

Prediction

The regression equation can be used to predict a value for y, given a particular x value. For a new observation xn+1, the point estimate is

ŷn+1 = b0 + b1xn+1

Predictions Using Regression Analysis

Predict the price for a house with 2000 square feet:

house price = 98.25 + 0.1098(2000) = 317.85

The predicted price for a house with 2000 square feet is 317.85($1,000s) = $317,850.
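The same prediction can be computed with the unrounded coefficients (a Python sketch, not part of the slides). The slide's 317.85 comes from the rounded coefficients 98.25 and 0.1098; the unrounded fit gives about 317.78, a small reminder that rounding coefficients before predicting shifts the answer slightly.

```python
# Point prediction for a 2000-square-foot house, using unrounded coefficients
prices = [245, 312, 279, 308, 199, 219, 405, 324, 319, 255]
sqft   = [1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700]

n = len(sqft)
x_bar, y_bar = sum(sqft) / n, sum(prices) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(sqft, prices)) / \
     sum((x - x_bar) ** 2 for x in sqft)
b0 = y_bar - b1 * x_bar

pred = b0 + b1 * 2000  # predicted price in $1000s
print(round(pred, 2))  # about 317.78, i.e. roughly $317,800
```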

Relevant Data Range

When using a regression model for prediction, only predict within the relevant range of data. It is risky to try to extrapolate far beyond the range of observed X values.

Correlation Analysis

Correlation analysis is used to measure the strength of the association (linear relationship) between two variables. Correlation is only concerned with the strength of the relationship; no causal effect is implied.

Correlation Analysis (continued)

The population correlation coefficient is denoted ρ (the Greek letter rho). The sample correlation coefficient is

r = sxy / (sx sy)

where

sxy = Σ(x - x̄)(y - ȳ) / (n - 1)

To test the null hypothesis of no linear association,

H0: ρ = 0

the test statistic follows the Student's t distribution with (n - 2) degrees of freedom:

t = r √(n - 2) / √(1 - r²)

Decision Rules

Hypothesis Test for Correlation

Lower-tail test: H0: ρ ≥ 0, H1: ρ < 0; reject H0 if t < -tn-2,α

Upper-tail test: H0: ρ ≤ 0, H1: ρ > 0; reject H0 if t > tn-2,α

Two-tail test: H0: ρ = 0, H1: ρ ≠ 0; reject H0 if t < -tn-2,α/2 or t > tn-2,α/2

where t = r √(n - 2) / √(1 - r²) has n - 2 d.f.
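As a sketch (not part of the slides), the correlation test statistic for the house price data, where r = 0.76211 and n = 10, can be computed directly; it coincides with the t statistic for the slope, since the two tests are equivalent in simple regression.

```python
import math

# t test statistic for H0: rho = 0 using the house price data
prices = [245, 312, 279, 308, 199, 219, 405, 324, 319, 255]
sqft   = [1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700]

n = len(sqft)
x_bar, y_bar = sum(sqft) / n, sum(prices) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(sqft, prices))
sxx = sum((x - x_bar) ** 2 for x in sqft)
syy = sum((y - y_bar) ** 2 for y in prices)
r = sxy / math.sqrt(sxx * syy)

t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(round(r, 5), round(t, 4))  # r = 0.76211; t equals the slope t statistic
```

With t ≈ 3.33 > t8,.025 = 2.306, H0: ρ = 0 is rejected, matching the slope test's conclusion.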
