Econometrics
...
Independence – X and Y are independent if:
P(X=x, Y=y) = P(X=x)P(Y=y)
Covariance – the extent to which two random variables move together
If independent, covariance is 0. But if covariance is 0, it does not necessarily mean they are independent.
If positive, the variables tend to move in the same direction
If negative, the variables tend to move in opposite directions
σ_XY=cov(X,Y)=E[(X-E(X))(Y-E(Y))]
σ_XY=E(XY)-E(X)E(Y)
= ∑_(i=1)^k ∑_(j=1)^l (x_j-μ_X)(y_i-μ_Y) P(X=x_j, Y=y_i)
Correlation – covariance between X and Y, divided by the product of their standard deviations (unitless): corr(X,Y) = σ_XY/(σ_X σ_Y)
-1≤corr(X,Y)≤1
If the conditional mean of Y does not depend on X, then X and Y are uncorrelated
E(Y|X) = μ_Y → cov(Y,X) = 0, corr(Y,X) = 0
If X and Y are uncorrelated, it does not necessarily mean that the conditional mean of Y given X does not depend on X
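A quick numerical check of both definitions (a minimal sketch using numpy; the two sample arrays are made up for illustration):

```python
import numpy as np

# Two made-up samples that move together
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Covariance: E[(X - E(X))(Y - E(Y))]
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

# Correlation: covariance divided by the product of the standard deviations
corr_xy = cov_xy / (x.std() * y.std())

print(cov_xy)                    # positive: the variables move together
print(corr_xy)                   # always between -1 and 1
print(np.corrcoef(x, y)[0, 1])   # numpy's built-in gives the same value
```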
Additional properties:
E(X+Y)=E(X)+E(Y)
E(a+bX+cY)=a+bE(X)+cE(Y)
var(a+bY)=b^2 var(Y)
var(aX+bY) = a^2 var(X) + 2ab cov(X,Y) + b^2 var(Y)
cov(a+bX,c+dY)=bdcov(X,Y)
E(Y^2 )=σ_Y^2+μ_Y^2
cov(a+bX+cV,Y)=bcov(X,Y)+cov(V,Y)
E(XY)=cov(X,Y)+E(X)E(Y)
var(X+Y)=var(X)+var(Y)+2cov(X,Y)
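The var(aX+bY) identity can be checked numerically; a minimal sketch with numpy (the constants a, b and the simulated data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)   # correlated with x by construction
a, b = 2.0, -3.0

# Left side: variance of the linear combination
lhs = np.var(a * x + b * y)

# Right side: a^2 var(X) + 2ab cov(X,Y) + b^2 var(Y)
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rhs = a**2 * np.var(x) + 2 * a * b * cov_xy + b**2 * np.var(y)

print(lhs, rhs)   # identical up to floating-point error
```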
Bernoulli Distribution – outcome is either 0 or 1
p=probability of success
P(X=0) = 1-p, P(X=1) = p
Mean: p; Variance: p(1-p)
Binomial Distribution – n Bernoulli trials that are independent of each other
f(x) = (n choose x) p^x (1-p)^(n-x)
Mean: np; Variance: np(1-p)
Poisson distribution
f(x)= (e^(-λ) λ^x)/x!
Mean and Variance: λ
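These moments can be checked against scipy.stats (a minimal sketch; the parameter values n=10, p=0.3, λ=4 are made up for illustration):

```python
from scipy import stats

n, p, lam = 10, 0.3, 4.0

# Bernoulli: mean p, variance p(1-p)
print(stats.bernoulli.stats(p, moments="mv"))   # (0.3, 0.21)

# Binomial: mean np, variance np(1-p)
print(stats.binom.stats(n, p, moments="mv"))    # (3.0, 2.1)

# Poisson: mean and variance both λ
print(stats.poisson.stats(lam, moments="mv"))   # (4.0, 4.0)

# A pmf evaluation, e.g. P(X=2) for the binomial
print(stats.binom.pmf(2, n, p))
```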
Continuous Uniform Distribution
f(x) = 1/(b-a) for a ≤ x ≤ b, and 0 otherwise
Normal distribution N(µ, σ²)
f(x) = 1/(σ√(2π)) exp(-(x-μ)²/(2σ²))
Standardize a normal random variable using:
Z = (X-μ)/σ
(For the sample mean X̄ of n observations, Z = (X̄-μ)/(σ/√n).)
Skewness is 0, kurtosis is 3
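A sketch of both standardizations (the values µ=100, σ=15, n=25 are assumptions for illustration):

```python
from math import sqrt
from scipy import stats

mu, sigma, n = 100.0, 15.0, 25

# Standardizing a single observation X
x = 110.0
z_obs = (x - mu) / sigma

# Standardizing a sample mean X̄, whose standard deviation is σ/√n
xbar = 104.0
z_mean = (xbar - mu) / (sigma / sqrt(n))

# Tail probability P(X̄ > 104) from the standard normal CDF
print(z_obs, z_mean, 1 - stats.norm.cdf(z_mean))
```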
Lognormal Distribution – a random variable X has a lognormal distribution if its natural logarithm, Y = ln X, is normally distributed
Chi-squared Distribution – the distribution of the sum of m squared independent standard normal random variables (m=degrees of freedom)
Student t Distribution – with m degrees of freedom, the distribution of Z/√(W/m), where Z is a standard normal random variable and W is an independently distributed chi-squared random variable with m degrees of freedom
When m > 30, it is approximately standard normal
F Distribution – with m and n degrees of freedom, the distribution of (W_m/m)/(W_n/n), where W_m and W_n are independently distributed chi-squared random variables with m and n degrees of freedom
If n is large, F_(m,n) can be approximated by F_(m,∞), the distribution of W_m/m
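Both large-sample approximations can be verified numerically (a sketch using scipy.stats; the degrees of freedom are arbitrary):

```python
from scipy import stats

# Student t critical values approach the standard normal as m grows
for m in (5, 30, 200):
    print(m, stats.t.ppf(0.975, m), stats.norm.ppf(0.975))

# F(m, n) critical values approach those of F(m, ∞),
# i.e. the distribution of (chi-squared with m df)/m, as n grows
m = 4
for n in (10, 100, 10_000):
    print(n, stats.f.ppf(0.95, m, n), stats.chi2.ppf(0.95, m) / m)
```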
Random Sample – independently and identically distributed; has joint density of:
f_(X_1,X_2,…,X_n)(x_1,x_2,…,x_n) = f(x_1)f(x_2)…f(x_n)
Sample Mean – an estimator of the mean of the population
X̄ = (X_1+X_2+⋯+X_n)/n = (1/n) ∑_(i=1)^n X_i
Mean: E(X̄) = (1/n) ∑_(i=1)^n E(X_i) = μ
Variance: var(X̄) = σ²/n
Sample Variance
S² = 1/(n-1) ∑_(i=1)^n (X_i-X̄)²
Standard deviation of X̄ (standard error): σ_X̄ = σ/√n, estimated by SE(X̄) = S/√n
Sampling from Normal Distributions – let X_1,…,X_n be a random sample from N(µ, σ²); then X̄ has a normal distribution with mean µ and variance σ²/n
Law of Large Numbers (consistency) – X̄ will be near µ with very high probability when n is large
Central Limit Theorem – when n is large, the distribution of X̄ is approximately normal
Z_n = (X̄_n-μ)/(σ/√n)
is approximately N(0,1), with mean 0 and variance 1
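A small simulation of both results (a minimal sketch using numpy; the exponential distribution and the sample sizes are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 2.0, 2.0, 100, 50_000   # exponential: mean = std = 2

# Draw many samples of size n from a skewed, non-normal distribution
samples = rng.exponential(scale=mu, size=(reps, n))
xbars = samples.mean(axis=1)

# Law of Large Numbers: the sample means cluster around µ
print(xbars.mean())   # ≈ 2.0

# Central Limit Theorem: the standardized means are ≈ N(0, 1)
z = (xbars - mu) / (sigma / np.sqrt(n))
print(z.mean(), z.var())   # ≈ 0 and ≈ 1
```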
Estimator Properties:
Unbiased: E(μ̂) = μ
Consistent: μ̂ → μ in probability as n → ∞
Efficient: μ̂ is more efficient than μ̃ if var(μ̂) < var(μ̃)
X̄ is BLUE – Best Linear Unbiased Estimator
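Efficiency can be illustrated by simulation (a sketch, assuming normal data; the sample median is used as a competing unbiased estimator, whose variance under normality is roughly πσ²/(2n)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 20_000

# Both the sample mean and the sample median estimate µ without bias
# for symmetric distributions; compare their variances under normality
draws = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
var_mean = draws.mean(axis=1).var()
var_median = np.median(draws, axis=1).var()

print(var_mean)     # ≈ σ²/n = 0.01
print(var_median)   # ≈ πσ²/(2n) ≈ 0.0157, so the mean is more efficient
```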
Hypothesis Testing
Null hypothesis H_0: θ=θ_0
Alternative hypothesis H_1: θ≠θ_0
p-value – significance probability (exact level of significance), the area in the tails of the distribution of X̄
p-value = 2Φ(-|(X̄^act-μ_(X,0))/σ_X̄|)
t-statistic – standardized sample average
t = (X̄-μ_(X,0))/SE(X̄)
p-value = 2Φ(-|t^act|)
Type I Error – reject H_0 when it is true
Type II Error – do not reject H_0 when it is false
Level of Significance – α = probability of type I error
Power of the test – 1- β
β - probability of committing a type II error
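Putting the testing formulas together (a minimal sketch; the data and the null value µ_0 = 5 are made up for illustration, and for small n the t distribution with n-1 df replaces Φ):

```python
import numpy as np
from scipy import stats

# Made-up sample; test H0: µ = 5 against H1: µ ≠ 5
x = np.array([5.3, 4.8, 6.1, 5.5, 4.9, 5.7, 5.2, 6.0, 4.6, 5.8])
mu_0 = 5.0

n = len(x)
xbar = x.mean()
s = x.std(ddof=1)        # sample standard deviation S
se = s / np.sqrt(n)      # SE(X̄) = S/√n

t_act = (xbar - mu_0) / se
# Two-sided p-value; with n small, use the t distribution with n-1 df
p_value = 2 * stats.t.cdf(-abs(t_act), df=n - 1)

print(t_act, p_value)
print(stats.ttest_1samp(x, mu_0))   # scipy's built-in test agrees
```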
Test of Significance Approach
Compute
...