
Week 1

- Frequency distributions
- Bar charts
- Pie charts
- Histograms: symmetry, skewness, mode; no gaps between columns
- Time series plots
- Bivariate relationships
- Measures of central tendency: MEAN, MEDIAN, MODE
- Measures of dispersion: RANGE, SD, VARIANCE, Z-SCORE
- Measures of association: COVARIANCE, CORRELATION COEFFICIENT
- Measures of relative standing: PERCENTILES
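A minimal Python sketch of how these summary measures can be computed. numpy is assumed to be available, and the two data arrays are made up purely for illustration.

```python
import numpy as np
from statistics import mode

# Made-up sample data, for illustration only
x = np.array([4, 8, 6, 5, 3, 7, 9, 5])
y = np.array([2, 7, 5, 4, 2, 6, 8, 4])

# Central tendency
print("mean:", np.mean(x), "median:", np.median(x), "mode:", mode(x.tolist()))

# Dispersion (ddof=1 -> sample variance and sample SD)
print("range:", x.max() - x.min())
print("variance:", np.var(x, ddof=1), "SD:", np.std(x, ddof=1))

# Z-score of a single observation relative to the sample
z = (x[0] - np.mean(x)) / np.std(x, ddof=1)
print("z-score of x[0]:", round(z, 3))

# Association
print("covariance:", np.cov(x, y)[0, 1])
print("correlation coefficient:", np.corrcoef(x, y)[0, 1])

# Relative standing
print("75th percentile:", np.percentile(x, 75))
```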

Week 2

Week 3
- Data collection: collect, analyse, extract, communicate
- Independence: Multiplication Rule
  o P(A & B) = P(A)·P(B|A), or equivalently P(A & B) = P(B)·P(A|B)
  o If independent: P(A & B) = P(A)·P(B)
- Mutually exclusive: Addition Rule
  o P(A or B) = P(A) + P(B) − P(A & B)
  o If mutually exclusive: P(A or B) = P(A) + P(B)
- Joint probability: the probability of A & B occurring at the same time
- Marginal probability: add across rows or down columns to find the probability of an event occurring
- Conditional probability rules:
  1. P(e|f) = P(e and f) / P(f) = probability of 'e' occurring given that 'f' has occurred
     o Ratio of joint probability to marginal probability
  2. Rearranging yields the multiplication rule for joint probability: P(e and f) = P(e|f)·P(f)
  3. If P(e|f) = P(e), conditioning has no effect and e & f are said to be independent
- Permutations
- Combinations
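A small numerical sketch of the rules above, using a hypothetical 2×2 frequency table whose counts are invented for illustration; math.perm and math.comb cover the permutation/combination counts at the end.

```python
import math

# Hypothetical 2x2 joint-frequency table (counts invented for illustration):
#              f      not f
#   e         30        20
#   not e     10        40
counts = {("e", "f"): 30, ("e", "not f"): 20,
          ("not e", "f"): 10, ("not e", "not f"): 40}
total = sum(counts.values())

# Joint probability: e and f occurring at the same time
p_e_and_f = counts[("e", "f")] / total

# Marginal probabilities: add across a row / down a column
p_e = (counts[("e", "f")] + counts[("e", "not f")]) / total
p_f = (counts[("e", "f")] + counts[("not e", "f")]) / total

# Conditional probability: ratio of joint to marginal
p_e_given_f = p_e_and_f / p_f

# Multiplication rule recovers the joint probability
assert abs(p_e_given_f * p_f - p_e_and_f) < 1e-12

# Addition rule: P(e or f) = P(e) + P(f) - P(e and f)
p_e_or_f = p_e + p_f - p_e_and_f
print("P(e|f) =", p_e_given_f, " P(e) =", p_e, " P(e or f) =", p_e_or_f)
# P(e|f) differs from P(e) here, so e and f are not independent

# Counting: permutations and combinations
print("5P2 =", math.perm(5, 2), " 5C2 =", math.comb(5, 2))  # 20, 10
```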

Week 4
- Random variables
- Mathematical expectation
  o Coin game: Heads ($10 gain), Tails ($5 loss). Expected value of the game X: E(X) = ½(10) + ½(−5) = $2.50
  o Odds into probabilities, e.g. $1.67 ALP [Pa], $2.15 Coalition [Pc]. Assuming a fair game, 0.67·Pa − 1·(1 − Pa) = 0, so 1.67·Pa = 1 and Pa = 1/1.67 = 59.9%; similarly Pc = 1/2.15 = 46.5%. Pa + Pc = 1.064 > 1 (roughly a 6% profit margin for the betting agency)
- Rules of expectations
  o Laws of expected value: E(c) = c; E(X + c) = E(X) + c; E(c·X) = c·E(X); E(X + Y) = E(X) + E(Y)
  o Laws of variance: V(c) = 0; V(X + c) = V(X); V(c·X) = c²·V(X); V(X + Y) = V(X) + V(Y) + 2COV(X, Y)
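A short sketch re-doing the two worked examples above (the coin game and the odds-to-probability conversion) and checking two of the expectation/variance laws; the numbers simply reproduce the arithmetic in the notes.

```python
# Expected value of the coin game: heads -> $10 gain, tails -> $5 loss
outcomes = {10: 0.5, -5: 0.5}
E_X = sum(x * p for x, p in outcomes.items())                  # 0.5*10 - 0.5*5 = 2.50
V_X = sum(p * (x - E_X) ** 2 for x, p in outcomes.items())
print("E(X) =", E_X, " V(X) =", V_X)

# Laws of expected value / variance, checked for c = 3
c = 3
E_cX = sum(c * x * p for x, p in outcomes.items())
V_cX = sum(p * (c * x - E_cX) ** 2 for x, p in outcomes.items())
assert abs(E_cX - c * E_X) < 1e-9        # E(cX) = c*E(X)
assert abs(V_cX - c ** 2 * V_X) < 1e-9   # V(cX) = c^2 * V(X)

# Decimal betting odds -> implied probabilities (fair-game argument)
odds_alp, odds_coalition = 1.67, 2.15
p_alp = 1 / odds_alp              # 0.67*Pa - 1*(1 - Pa) = 0  =>  Pa = 1/1.67 ~ 0.599
p_coalition = 1 / odds_coalition  # ~0.465
print("Pa + Pc =", round(p_alp + p_coalition, 3))  # ~1.064 -> ~6% bookmaker margin
```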

Week 5
Binomial Distribution/Experiment Requirements

- A fixed number of trials, n
- Two possible outcomes on each trial, success/failure, with P(success) = p and P(failure) = 1 − p
- The trials are independent
- Each trial is a Bernoulli process

The random variable of a binomial experiment is the number of successes in n trials (the binomial random variable):
- P(X = x) = C(n, x) · p^x · (1 − p)^(n − x), where n = number of trials and x = number of successes
- Mean = μ = np
- Variance = σ² = np(1 − p) (standard deviation σ = √(np(1 − p)))

Uniform Distribution
The density is constant, so its height on the y axis is f(x) = 1/(b − a), where a and b are the endpoints of the interval on the x axis.
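A sketch of the binomial PMF, mean, and variance, and of the uniform density height, assuming scipy is installed; the values of n, p, a, and b are arbitrary examples.

```python
from math import comb, sqrt
from scipy.stats import binom, uniform   # scipy assumed available

n, p = 10, 0.3   # arbitrary example values

# Binomial PMF "by hand": P(X = x) = C(n, x) * p^x * (1-p)^(n-x)
x = 4
pmf_by_hand = comb(n, x) * p**x * (1 - p)**(n - x)
print("P(X=4) by hand:", pmf_by_hand, " via scipy:", binom.pmf(x, n, p))

# Mean, variance, standard deviation
print("mean np =", n * p,
      " variance np(1-p) =", n * p * (1 - p),
      " SD =", sqrt(n * p * (1 - p)))

# Uniform distribution on [a, b]: density height is 1 / (b - a)
a, b = 2, 7
print("f(x) = 1/(b-a) =", 1 / (b - a),
      " check via scipy:", uniform.pdf(4, loc=a, scale=b - a))
```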

Week 6
Normal Distribution Example
- X ~ N(50, 100). Find P(45 ≤ X ≤ 60). Standardise the scores! (A code sketch of this calculation appears at the end of this week's notes.)
  o P(45 ≤ X ≤ 60) = P((45 − 50)/10 ≤ Z ≤ (60 − 50)/10) = P(−1/2 ≤ Z ≤ 1) = P(0 ≤ Z ≤ 1/2) + P(0 ≤ Z ≤ 1), by symmetry, because P(−1/2 ≤ Z ≤ 0) = P(0 ≤ Z ≤ 1/2). We do this because it is much easier to look the probabilities up in the standard normal table in this form: = 0.1915 + 0.3413 = 0.5328 (values from the standard normal table)
- X ~ N(50, 100). Find z_0.025 and unstandardise it to find the corresponding X value
  o z_0.025 = 1.96. Unstandardise: 1.96 = (X − 50)/10, so X = 50 + 19.6 = 69.6
Finding the Value of Z Given the Probabilities
- z_A is the 100(1 − A)th percentile, i.e. P(Z > z_A) = A
- Find the 97.5th percentile: z_0.025 (top 2.5%)
  o P(Z > z_0.025) = 0.025, so P(Z ≤ z_0.025) = 1 − 0.025 = 0.9750 and z_0.025 = 1.96
Other Questions
- Find the value of a standard normal RV where the probability that the RV is GREATER than it is 5%: the 95th percentile = 1.645
- Find the value of a standard normal RV where the probability that the RV is LESS than it is 5%: the 5th percentile = −1.645 (by symmetry)
Concepts of Estimation
- Estimation's objective is to determine the approximate value of a population parameter from sample data (e.g. the sample mean for the population mean). There are two types of estimators, defined below:
  o Point estimator: draws inferences about a population by estimating the value of an unknown parameter using a single value (this will essentially always be wrong, since the probability of any single point on a continuous random variable's probability density function is virtually 0)
  o Interval estimator: draws inferences about a population by estimating the value of an unknown parameter using an interval
Characteristics of Good Estimators
- Unbiased: E(estimator) = parameter
- Consistency: the difference between the estimator and the parameter becomes smaller as the sample size increases
- Relative efficiency: of two unbiased estimators of a parameter, the one with the lower variance is said to be relatively more efficient
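A small simulation sketch, illustrative only, of what unbiasedness and consistency mean for the sample mean as an estimator of a population mean; the population parameters are invented for the demo.

```python
import random
import statistics

random.seed(1)
mu = 50  # true population mean (known here only because we simulate the population)

def sample_mean(n):
    # Draw n observations from a N(mu, sigma=10) population and average them
    return statistics.fmean(random.gauss(mu, 10) for _ in range(n))

# Unbiasedness: averaged over many samples, the estimator centres on mu
means = [sample_mean(30) for _ in range(5000)]
print("average of sample means:", round(statistics.fmean(means), 2), " vs mu =", mu)

# Consistency: the estimator gets closer to mu as the sample size grows
for n in (10, 100, 10_000):
    print(f"n={n:>6}: sample mean = {sample_mean(n):.2f}")
```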

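The normal-distribution example above, re-done as a short scipy.stats sketch (scipy assumed installed): standardising to get P(45 ≤ X ≤ 60), then unstandardising z_0.025 back to an X value.

```python
from scipy.stats import norm

mu, sigma = 50, 10   # X ~ N(50, 100), so the standard deviation is 10

# P(45 <= X <= 60) by standardising: P(-0.5 <= Z <= 1)
p = norm.cdf(1) - norm.cdf(-0.5)
print("P(45 <= X <= 60) =", round(p, 4))               # ~0.5328

# z_0.025: the value with 2.5% in the upper tail (the 97.5th percentile)
z = norm.ppf(0.975)
print("z_0.025 =", round(z, 2))                        # ~1.96

# Unstandardise: X = mu + z * sigma
print("corresponding X =", round(mu + z * sigma, 1))   # ~69.6

# 95th and 5th percentiles of Z (symmetric about 0)
print(round(norm.ppf(0.95), 3), round(norm.ppf(0.05), 3))  # 1.645, -1.645
```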