# 1 Causal Modeling

• X causes Y: eating too many cookies will make you gain weight
• X -> Y ; Cookies -> Wt
• We need to make sure the cause is not spurious
• Causal path analysis

## 1.1 Direct and Indirect Effects

• Direct: X1 -> Y (while controlling for X2)
• Indirect: X1 -> X2 -> Y ; Cookies -> Wt (with exercise intervening between them)
• We will look at these concepts in more detail when we talk about moderation analysis

### 1.1.1 Back to last week's example:

• The model we used was Predicted Happiness (Y) ~ Slope × Ice Cream (X1) + Slope × Brownies (X2) + Intercept; in formula form: $$\hat{Y} = b_1X_1 + b_2X_2 + b_0$$
# packages we will need to create and graph our data
library(MASS) # create data (mvrnorm)
library(car)  # regression diagnostics and graphing
py1 =.6 #Cor between X1 (ice cream) and happiness
py2 =.4 #Cor between X2 (Brownies) and happiness
p12= .2 #Cor between X1 (ice cream) and X2 (Brownies)

Means.X1X2Y<- c(10,10,10) #set the means of X and Y variables
CovMatrix.X1X2Y <- matrix(c(1,p12,py1,
p12,1,py2,
py1,py2,1),3,3) # creates the covariance matrix

#build the correlated variables. Note: empirical=TRUE means make the correlation EXACTLY r.
set.seed(42)
CorrDataT<-mvrnorm(n=100, mu=Means.X1X2Y,Sigma=CovMatrix.X1X2Y, empirical=TRUE)
# Convert to a data.frame, which is like the SPSS data window
CorrDataT<-as.data.frame(CorrDataT)
# let's add labels to the vectors we created
colnames(CorrDataT) <- c("IceCream","Brownies","Happiness")

###############Model 1
Ice.Model<-lm(Happiness~ IceCream, data = CorrDataT)
summary(Ice.Model)
##
## Call:
## lm(formula = Happiness ~ IceCream, data = CorrDataT)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.62669 -0.59548  0.03645  0.64933  2.01161
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  4.00000    0.81211   4.925 3.42e-06 ***
## IceCream     0.60000    0.08081   7.425 4.19e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.8041 on 98 degrees of freedom
## Multiple R-squared:   0.36,  Adjusted R-squared:  0.3535
## F-statistic: 55.13 on 1 and 98 DF,  p-value: 4.193e-11
###############Model 2
Ice.Brown.Model<-lm(Happiness~ IceCream+Brownies, data = CorrDataT)
summary(Ice.Brown.Model)
##
## Call:
## lm(formula = Happiness ~ IceCream + Brownies, data = CorrDataT)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -2.03238 -0.49255 -0.08491  0.55931  1.74670
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  1.66667    0.98236   1.697 0.092981 .
## IceCream     0.54167    0.07743   6.995 3.41e-10 ***
## Brownies     0.29167    0.07743   3.767 0.000284 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7549 on 97 degrees of freedom
## Multiple R-squared:  0.4417, Adjusted R-squared:  0.4302
## F-statistic: 38.37 on 2 and 97 DF,  p-value: 5.301e-13

### 1.1.2 Plots

• Now plotting is a bit more complex, as we have two predictors
• Simple scatter plots are not semi-partialed
• Also while we are not interacting the two variables, each has an effect on the DV
• We must visualize both effects
• There are different suggestions on how to do this [we will come back to differences in plotting later]
#packages we will need to plot our results
library(effects)
#plot individual effects
Ice.Brown.Model.Plot <- allEffects(Ice.Brown.Model, xlevels=list(IceCream=seq(8, 12, 1),Brownies=seq(8, 12, 1)))
plot(Ice.Brown.Model.Plot, 'IceCream', ylab="Happiness")

plot(Ice.Brown.Model.Plot, 'Brownies', ylab="Happiness")

#plot both effects
Ice.Brown.Model.Plot2<-Effect(c("IceCream", "Brownies"), Ice.Brown.Model,xlevels=list(IceCream=c(8, 12), Brownies=c(8,12)))
plot(Ice.Brown.Model.Plot2)

### 1.1.3 Fully Indirect

• Not spurious, but X1 does not affect Y directly (X2 intervenes completely)
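To see this pattern in simulated data, here is a hedged sketch (variable names and values are illustrative, not from the notes) using the same mvrnorm(empirical=TRUE) trick as in the examples above: the X1 -> Y correlation exists, but it runs entirely through X2.

```r
library(MASS)
# fully indirect pattern: r_Y1 is exactly r_Y2 * r_12
r12 <- .7; ry2 <- .7
ry1 <- r12 * ry2  # .49
Sigma <- matrix(c(1,   r12, ry1,
                  r12, 1,   ry2,
                  ry1, ry2, 1), 3, 3)
set.seed(42)
D <- as.data.frame(mvrnorm(100, mu = c(0, 0, 0), Sigma = Sigma, empirical = TRUE))
colnames(D) <- c("X1", "X2", "Y")
round(coef(lm(Y ~ X1 + X2, data = D)), 3)
# X1's partial slope is 0 once X2 is in the model; X2 carries the whole effect
```

Because empirical=TRUE makes the sample correlations exactly match Sigma, X1 gets a partial slope of exactly zero even though X1 and Y are correlated at .49 on their own.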

### 1.1.4 Full Redundancy (Spurious models)

• When $$r_{Y2} \approx r_{Y1}r_{12}$$
• X1 is confounded with X2
• X1 and X2 are completely redundant
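A quick arithmetic check of that condition, using the values set in the simulation just below (.6, .6, and .99):

```r
# redundancy condition check: r_Y1 * r_12 should be close to r_Y2
I.py1 <- .6; I.py2 <- .6; I.p12 <- .99
I.py1 * I.p12  # 0.594, approximately equal to I.py2 = 0.6
```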
I.py1 =.6 #Cor between X1 Oreos and happiness
I.py2 =.6 #Cor between X2 Cookies and happiness
I.p12 = .99 #Cor between X1 Oreos and X2 Cookies

I.Means.X1X2Y<- c(10,10,10) #set the means of X and Y variables
I.CovMatrix.X1X2Y <- matrix(c(1,I.p12,I.py1,
I.p12,1,I.py2,
I.py1,I.py2,1),3,3) # creates the covariance matrix

#build the correlated variables. Note: empirical=TRUE means make the correlation EXACTLY r.
# if we say empirical=FALSE, the correlation would be normally distributed around r
set.seed(42)
IData<-mvrnorm(n=100, mu=I.Means.X1X2Y,Sigma=I.CovMatrix.X1X2Y, empirical=TRUE)
# Convert to a data.frame, which is like the SPSS data window
IData<-as.data.frame(IData)
# let's add labels to the vectors we created
colnames(IData) <- c("Oreos","Cookies","Happiness")

###############Model 1
summary(lm(Happiness~ Oreos, data = IData))
##
## Call:
## lm(formula = Happiness ~ Oreos, data = IData)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -2.31637 -0.39257 -0.00598  0.36991  2.45566
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  4.00000    0.81211   4.925 3.42e-06 ***
## Oreos        0.60000    0.08081   7.425 4.19e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.8041 on 98 degrees of freedom
## Multiple R-squared:   0.36,  Adjusted R-squared:  0.3535
## F-statistic: 55.13 on 1 and 98 DF,  p-value: 4.193e-11
###############Model 2
summary(lm(Happiness~ Cookies, data = IData))
##
## Call:
## lm(formula = Happiness ~ Cookies, data = IData)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -2.19446 -0.37045 -0.00244  0.38964  2.37326
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  4.00000    0.81211   4.925 3.42e-06 ***
## Cookies      0.60000    0.08081   7.425 4.19e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.8041 on 98 degrees of freedom
## Multiple R-squared:   0.36,  Adjusted R-squared:  0.3535
## F-statistic: 55.13 on 1 and 98 DF,  p-value: 4.193e-11
###############Model 3
summary(lm(Happiness~ Oreos+Cookies, data = IData))
##
## Call:
## lm(formula = Happiness ~ Oreos + Cookies, data = IData)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -2.25666 -0.38691  0.00358  0.38119  2.41233
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   3.9698     0.8172   4.858 4.55e-06 ***
## Oreos         0.3015     0.5750   0.524    0.601
## Cookies       0.3015     0.5750   0.524    0.601
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.8071 on 97 degrees of freedom
## Multiple R-squared:  0.3618, Adjusted R-squared:  0.3487
## F-statistic:  27.5 on 2 and 97 DF,  p-value: 3.468e-10

### 1.1.5 Suppression

• Suppression is a strange case where the IV -> DV relationship is hidden (suppressed) by another variable
• Tax cuts cause growth, and tax cuts cause inflation. Tax cuts alone might look positive, but add inflation to the model and the tax-cut effect on growth can look quite different
• Or the suppressor variable can flip the sign of the relationship
• Here is an example where the effect was hidden:
sup.py1 =  2.5 #Covar between tax cuts and growth
sup.py2 = -5.5 #Covar between inflation and growth
sup.p12 =  4  #Covar between tax cuts and inflation

Supp.X1X2Y<- c(5,5,5) #set the means of X and Y variables
Supp.CovMatrix.X1X2Y <- matrix(c(10,sup.p12,sup.py1,
sup.p12,10,sup.py2,
sup.py1,sup.py2,10),3,3) # creates the covariance matrix

#build the correlated variables. Note: empirical=TRUE means make the correlation EXACTLY r.
# if we say empirical=FALSE, the correlation would be normally distributed around r
set.seed(42)
SuppData<-mvrnorm(n=100, mu=Supp.X1X2Y,Sigma=Supp.CovMatrix.X1X2Y, empirical=TRUE)

# Convert to a data.frame
SuppData<-as.data.frame(SuppData)
colnames(SuppData) <- c("TaxCuts","inflation","growth")

###############Model 1
TaxCutsOnly<-(lm(growth~ TaxCuts, data = SuppData))
summary(TaxCutsOnly)
##
## Call:
## lm(formula = growth ~ TaxCuts, data = SuppData)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -5.7381 -2.0879 -0.1388  2.3052  9.6160
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  3.75000    0.57781   6.490 3.53e-09 ***
## TaxCuts      0.25000    0.09781   2.556   0.0121 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.077 on 98 degrees of freedom
## Multiple R-squared:  0.0625, Adjusted R-squared:  0.05293
## F-statistic: 6.533 on 1 and 98 DF,  p-value: 0.01212
###############Model 2
Full.Model<-lm(growth~ TaxCuts+inflation, data = SuppData)
summary(Full.Model)
##
## Call:
## lm(formula = growth ~ TaxCuts + inflation, data = SuppData)
##
## Residuals:
##    Min     1Q Median     3Q    Max
## -5.797 -1.358  0.015  1.324  6.913
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  6.07143    0.45203  13.431  < 2e-16 ***
## TaxCuts      0.55952    0.07303   7.662 1.39e-11 ***
## inflation   -0.77381    0.07303 -10.596  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.106 on 97 degrees of freedom
## Multiple R-squared:  0.5655, Adjusted R-squared:  0.5565
## F-statistic: 63.12 on 2 and 97 DF,  p-value: < 2.2e-16
• You will notice that the slope for tax cuts alone, 0.25, was lower than 0.5595238 when controlling for inflation
• However, you will also notice that inflation, -0.7738095, has a larger (and negative) effect than tax cuts, 0.5595238
• So in other words: yes, tax cuts work, but they don't override inflation!
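As a quick check on those numbers, both sets of slopes can be recovered directly from the covariances we set above, via the normal-equation solution b = Sxx⁻¹ sxy (a sketch, not part of the original notes):

```r
# Variance/covariance of the predictors (TaxCuts, inflation), as set above
Sxx <- matrix(c(10, 4,
                4, 10), 2, 2)
sxy <- c(2.5, -5.5)  # covariances of each predictor with growth
sxy[1] / 10          # simple slope for TaxCuts alone: 0.25
solve(Sxx, sxy)      # partial slopes: 0.5595 and -0.7738, matching the full model
```

The sign of the TaxCuts slope does not flip here, but its size changes a lot once inflation's negative covariance with growth is accounted for.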

### 1.1.6 Let's try to graph our predictions

• The challenge is that we now need to account for 2 predictors
• So simple scatter plots will not work
• Also, how you plot depends on your theory/experiment/story
• We need to control for inflation (or view the tax slope at different levels of inflation)
• First, let's view the tax slope without controlling for inflation
#plot individual effects
Tax.Model.Plot <- allEffects(TaxCutsOnly, xlevels=list(TaxCuts=seq(3, 7, 1)))
plot(Tax.Model.Plot, 'TaxCuts', ylab="Growth")

• Now let's view it controlling for inflation
#plot individual effects
Full.Model.Plot <- allEffects(Full.Model, xlevels=list(TaxCuts=seq(3, 7, 1),inflation=seq(3, 7, 1)))
plot(Full.Model.Plot, 'TaxCuts', ylab="Growth")

plot(Full.Model.Plot, 'inflation', ylab="Growth")

#plot both effects
Full.Model.Plot2<-Effect(c("TaxCuts", "inflation"), Full.Model,xlevels=list(TaxCuts=c(3, 7), inflation=c(3,7)))
plot(Full.Model.Plot2)

# 2 Estimating population R-squared

• $$\rho^2$$ is estimated by multiple $$R^2$$
• Problem: in small samples correlations are unstable (and inflated)
• Also, as we add predictors, $$R^2$$ gets inflated
• So we use an adjusted multiple $$R^2$$
• Adjusted $$R^2 = 1 - (1 - R_Y^2)\frac{n-1}{n-k-1}$$, where n is the number of cases and k is the number of predictors
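Plugging in the Model 2 values from earlier (R² = .4417, n = 100, k = 2) reproduces the adjusted R² that R reported:

```r
# adjusted R^2 by hand, using the Model 2 numbers from the summary above
R2 <- 0.4417; n <- 100; k <- 2
1 - (1 - R2) * (n - 1) / (n - k - 1)  # 0.4302, matching the summary output
```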

# 3 Multiple Linear Regression Assumptions

• Multicollinearity: Predictors cannot be fully (or nearly fully) redundant [check the correlations between predictors]
• Homoscedasticity of residuals to fitted values
• Normal distribution of residuals
• Absence of outliers
• Ratio of cases to predictors
  • Number of cases must exceed the number of predictors
  • Barely acceptable minimum: 5 cases per predictor
  • Preferred minimum: 20-30 cases per predictor
• Linearity of prediction
• Independence (no auto-correlated errors)

## 3.1 How to check assumptions?

• First, let's fit the model
Linear.Model<-lm(Happiness~ IceCream+Brownies, data = CorrDataT)

### 3.1.1 Let's test for Multicollinearity

• Multicollinearity can be detected when the variance of the terms you are interested in becomes inflated
• So we need to test the variance on the factors
• Basic logic, step 1: get the variance on the predictor with only that term in the model [Vmin]
• Basic logic, step 2: get the variance on the predictor with all other terms in the model [Vmax]
• Basic logic, step 3: take the ratio Vmax/Vmin; if the value is > 4, you have a problem
• That's a lot of steps, but R can do it for you with basically one line of code
• Variance inflation factors (vif)
vif(Linear.Model) # variance inflation factors 
## IceCream Brownies
## 1.041667 1.041667
vif(Linear.Model) > 4 # problem?
## IceCream Brownies
##    FALSE    FALSE
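With only two predictors, the same number can be computed by hand: VIF = 1/(1 - r12²), where r12 is the correlation between the predictors. A quick check using the r12 = .2 we set when simulating the data:

```r
# VIF by hand for the two-predictor case (r12 = .2 from the simulation above)
p12 <- 0.2
1 / (1 - p12^2)  # 1.041667, matching vif(Linear.Model)
```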

### 3.1.2 Let's check normality and outliers visually first

#To see them all at once
layout(matrix(c(1,2,3,4),2,2)) # optional 4 graphs/page
plot(Linear.Model)

• Residuals vs. fitted and square-rooted residuals vs. fitted should be homoscedastic (randomly distributed around 0)
• There is a statistical test for this (in R, but not SPSS)

# Evaluate homoscedasticity
# non-constant error variance test
ncvTest(Linear.Model)
## Non-constant Variance Score Test
## Variance formula: ~ fitted.values
## Chisquare = 0.8701472    Df = 1     p = 0.3509146

### 3.1.3 Q-Q plot

• Should be a straight line
• You can also visualize the residuals as a histogram
# distribution of studentized residuals
sresid <- studres(Linear.Model)
hist(sresid, freq=FALSE,
main="Distribution of Studentized Residuals")
xfit<-seq(min(sresid),max(sresid),length=40)
yfit<-dnorm(xfit)
lines(xfit, yfit)

### 3.1.4 Cook’s distances

• It's possible for one or more observations to "pull" the regression line away from the trend in the rest of the data
• Cook's distance is one of several possible measures of influence
• Cook's D for an observation is the sum of squared differences between the predicted values computed with and without that observation, divided by the number of parameters in the model times the error variance
• R did this automatically in plot 4, and we can see a few people labeled with numbers (those are the outliers)
• We can also see the influence with a little R coding (and set a more conservative threshold)
# Cook's D plot
# identify D values > 4/(n-k-1)
cutoff <- 4/((nrow(CorrDataT)-length(Linear.Model$coefficients)-2))
cutoff
## [1] 0.04210526
plot(Linear.Model, which=4, cook.levels=cutoff)
plot(Linear.Model, which=5, cook.levels=cutoff)
• Remove observations with large Cook's distances (if we want to)
CorrDataT$CooksD<-(cooks.distance(Linear.Model))

CorrDataT_Cleaned<-subset(CorrDataT,CooksD < cutoff)
nrow(CorrDataT_Cleaned)
## [1] 94
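To make the definition of Cook's D concrete, here is a hedged, self-contained sketch (toy data, not the CorrDataT example) that computes it for one observation by refitting the model without that observation and comparing to R's built-in cooks.distance():

```r
set.seed(1)
d <- data.frame(x = rnorm(20))
d$y <- d$x + rnorm(20)
d$y[20] <- d$y[20] + 5                 # make the last point unusual
m   <- lm(y ~ x, data = d)
m20 <- lm(y ~ x, data = d[-20, ])      # refit without observation 20
p   <- length(coef(m))                 # number of parameters
mse <- sum(residuals(m)^2) / df.residual(m)
# sum of squared changes in the fitted values, scaled by p * error variance
D20 <- sum((fitted(m) - predict(m20, d))^2) / (p * mse)
all.equal(unname(D20), unname(cooks.distance(m)[20]))  # the two agree
```

This leave-one-out version is the definition; cooks.distance() computes the algebraically equivalent formula from the hat values without refitting.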

### 3.1.5 Another method of testing for influence

• Bonferroni p-value for the most extreme observation
• Notice it gives a different number of outliers
• Not all tests agree
# Assessing Outliers
outlierTest(Linear.Model) # Bonferroni p-value for most extreme obs
##
## No Studentized residuals with Bonferroni p < 0.05
## Largest |rstudent|:
##     rstudent unadjusted p-value Bonferroni p
## 94 -2.839201          0.0055205      0.55205

### 3.1.6 Linearity and autocorrelation tests

• The green lines should be straight relative to the red dotted line
• The autocorrelation test should be non-significant (meaning each residual is not correlated with the next residual)
# Evaluate Nonlinearity
# component + residual plot aka partial-residual plots
crPlots(Linear.Model)

# Test for Autocorrelated Errors
durbinWatsonTest(Linear.Model)
##  lag Autocorrelation D-W Statistic p-value
##    1     -0.01907508      1.967401   0.878
##  Alternative hypothesis: rho != 0
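The Durbin-Watson statistic itself is easy to compute by hand: the sum of squared differences between successive residuals, divided by the sum of squared residuals. A small illustrative sketch (toy residuals, not the model above):

```r
set.seed(7)
e <- rnorm(50)                  # uncorrelated "residuals"
DW <- sum(diff(e)^2) / sum(e^2) # near 2 when residuals are not autocorrelated
DW
```

Values well below 2 suggest positive autocorrelation, and values well above 2 suggest negative autocorrelation, which is what durbinWatsonTest() evaluates for the model's actual residuals.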