# 1 Multi-Level Models

• Hierarchical designs: Students nested in classrooms (clustered data) with student-level predictors
• Multilevel designs: Students nested in classrooms with student-level and classroom-level predictors
• Problem: Clustering means students’ scores within a classroom may be correlated because they all have the same teacher (for example)

Levels

• Level 1 = Smallest level (often subjects/students)
• Level 2 = The group/cluster the students belong to (classrooms)
• You can have higher levels as well, such as Level 3 = Classrooms nested in schools

# 2 HLM with only Level 1 predictors

• Controlling for classroom differences, but making predictions about students

## 2.1 Equations (two level model)

OLS Regression

$Y_i = \beta_1X_i +\beta_0 + \epsilon_i$

### 2.1.1 Level 1

• Within each group/cluster (students = $$i$$ within Classroom 1 = $$j$$; we would replace $$j$$ with $$g$$ for Classroom 2, and so forth for each classroom!) $y_{ij} = B_{1j}X_{ij} + B_{0j} + r_{ij}$

• Where $$B_{0j}$$ = intercept in group j
• Where $$B_{1j}$$ = slope of students in group j
• Where $$r_{ij}$$ = residuals of each i (student) within group j
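As a quick sketch of what the level 1 equation describes, here is a toy simulation of a single classroom’s data (the intercept, slope, residual SD, and sample size below are made-up illustration values, not the course data):

```r
set.seed(42)
n   <- 100
B0j <- 28     # made-up intercept for classroom j
B1j <- 2.5    # made-up slope for classroom j
X   <- rbinom(n, 1, 0.5)           # level 1 predictor X_ij (0/1 per student)
r   <- rnorm(n, mean = 0, sd = 3)  # level 1 residuals r_ij
Y   <- B1j * X + B0j + r           # y_ij = B1j*X_ij + B0j + r_ij
coef(lm(Y ~ X))                    # OLS should roughly recover B0j and B1j
```

Within one classroom this is just ordinary regression; the multilevel machinery only enters once multiple classrooms each get their own $$B_{0j}$$ and $$B_{1j}$$.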

### 2.1.2 Level 2

• Each Group/Cluster (Students = $$i$$ within Classroom 1 = $$j$$, Classroom 2 = $$g$$, and so forth for each classroom)

Level 2 Intercept

$B_{0j} = \gamma_{00}+u_{0j}$

• Where $$\gamma_{00}$$ = fixed population intercept
• Where $$u_{0j}$$ = random deviation of the classroom intercept from the fixed population intercept
• We assume classroom intercepts vary randomly around the population intercept

Level 2 Slope

$B_{1j} = \gamma_{10}+u_{1j}$

• Where $$\gamma_{10}$$ = fixed population slope
• Where $$u_{1j}$$ = random deviation of the classroom slope from the fixed population slope
• We assume classroom slopes vary randomly around the population slope
• Note: We would repeat this for each classroom we have (g, etc.)

### 2.1.3 Mixed Equation

• We put level 1 and level 2 together $y_{ij} = (\gamma_{10}+u_{1j})X_{ij} + (\gamma_{00}+u_{0j})+ r_{ij}$
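Distributing $$X_{ij}$$ and regrouping makes the fixed and random parts of the mixed equation explicit (this is pure algebra on the equation above):

```latex
y_{ij} = \underbrace{\gamma_{00} + \gamma_{10}X_{ij}}_{\text{fixed effects}}
       + \underbrace{u_{0j} + u_{1j}X_{ij} + r_{ij}}_{\text{random effects}}
```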

#### 2.1.3.1 Variance Components

• We now have multiple sources of error in our equation
• $$\sigma^2$$ = variance of $$r_{ij}$$, AKA variance due to error at level 1
• $$\tau_{00}$$ = variance of $$u_{0j}$$, AKA variance of random intercepts at level 2
• $$\tau_{11}$$ = variance of $$u_{1j}$$, AKA variance of random slopes at level 2
• $$\tau_{01}$$ = covariance of $$u_{0j}$$ and $$u_{1j}$$, AKA random slopes and random intercepts can correlate
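In standard MLM notation these components are collected into a level 2 covariance matrix, with the level 2 residuals assumed bivariate normal (this distributional assumption is standard, though not stated explicitly above):

```latex
\begin{pmatrix} u_{0j} \\ u_{1j} \end{pmatrix}
\sim N\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix},
\begin{pmatrix} \tau_{00} & \tau_{01} \\ \tau_{01} & \tau_{11} \end{pmatrix}\right),
\qquad r_{ij} \sim N(0,\ \sigma^2)
```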

## 2.2 ICC

• The degree of clustering is measured via the intraclass correlation (ICC)
• The proportion of total variance that is between the groups $ICC = \frac{\tau}{\tau + \sigma^2}$

• $$\tau$$ = variance in a variable due to differences between groups
• $$\sigma^2$$ = residual variance within groups
• We would fit this on the null model (no level 1 predictors)
• OLS regression assumes ICC = 0 (each classroom is unrelated to the others), but MLM/HLM does not
• If this value is large, it means that variance can be attributed to level 2

## 2.3 Example

• 1 measurement of math score per student in 4 classrooms
• An unbalanced proportion of students in each classroom [prob = .7,.6,.4,.25], regardless of classroom, were told to tell themselves they were special every day
• Click here for the file with the code that generated the simulated HLM data.
MLM.Data<-read.csv("RegressionClass/L13_MLM/HLM.csv")

### 2.3.1 Analysis

• Note: don’t forget to center your variable
• When we center relative to all people and all groups, we will call this grand mean centering
library(lme4)     #mixed model package by Douglas Bates
library(ggplot2)  #GGplot package for visualizing data
### Center variable
# scale() returns a one-column matrix; [,1] extracts a plain numeric vector
MLM.Data$SelfAff.C <- scale(MLM.Data$SelfAff, center=TRUE, scale=FALSE)[,1]
#for plots we will also label it
MLM.Data$SelfAff <- factor(MLM.Data$SelfAff,
levels=c(0,1),
labels=c("No","Yes"))
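A side note on `scale()`: it returns a one-column matrix rather than a plain vector, which is why the centering line above strips the matrix structure with bracket indexing. A toy example with made-up numbers:

```r
x <- c(2, 4, 6)
x.mat <- scale(x, center = TRUE, scale = FALSE)       # 3 x 1 matrix
x.vec <- scale(x, center = TRUE, scale = FALSE)[, 1]  # plain numeric vector
x.vec  # deviations from the mean of 4: -2, 0, 2
```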

Let’s plot

theme_set(theme_bw(base_size = 12, base_family = ""))
ClassRoom.Plot <-ggplot(data = MLM.Data, aes(x = SelfAff, y=Math,group=Classroom))+
facet_grid( ~ Classroom)+
geom_point(aes(colour = Classroom))+
geom_smooth(method = "lm", se = TRUE, aes(colour = Classroom))+
xlab("Self-affirmations")+ylab("Math Score")+
theme(legend.position = "none")
ClassRoom.Plot

### 2.3.2 Modeling

Null Model

Model.Null<-lmer(Math ~1+(1|Classroom),
data=MLM.Data, REML=FALSE)
summary(Model.Null)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: Math ~ 1 + (1 | Classroom)
##    Data: MLM.Data
##
##      AIC      BIC   logLik deviance df.resid
##   2135.9   2147.9  -1065.0   2129.9      397
##
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -2.6200 -0.6457 -0.0375  0.7427  3.9849
##
## Random effects:
##  Groups    Name        Variance Std.Dev.
##  Classroom (Intercept) 24.21    4.921
##  Residual              11.40    3.376
## Number of obs: 400, groups:  Classroom, 4
##
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   28.517      2.466   11.56
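Given the random-effects table above (classroom intercept variance 24.21, residual variance 11.40), the ICC can be checked with simple arithmetic:

```r
tau    <- 24.21  # variance of random intercepts (level 2), from the output above
sigma2 <- 11.40  # residual variance (level 1), from the output above
ICC <- tau / (tau + sigma2)
round(ICC, 2)  # 0.68: most of the variance is between classrooms
```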
• Next we examine the random intercepts for each classroom
ranef(Model.Null)
## $Classroom
##    (Intercept)
## C1   -6.948231
## C2   -1.527141
## C3    2.016522
## C4    6.458850
• Look at the ICC: the variance and SD of these values are what was seen in the table above
• I wrote it as a function for you to use in the future; just enter the model name
ICC.Model<-function(Model.Name) {
  tau.Null <- as.numeric(lapply(summary(Model.Name)$varcor, diag))
  sigma.Null <- as.numeric(attr(summary(Model.Name)$varcor, "sc")^2)
  ICC.Null <- tau.Null/(tau.Null+sigma.Null)
  return(ICC.Null)
}
• The ICC = 0.6799378 > 0, meaning we were correct to treat this as an MLM/HLM problem

Test Model

Model.1<-lmer(Math ~ SelfAff.C+(1|Classroom),
data=MLM.Data, REML=FALSE)
summary(Model.1)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: Math ~ SelfAff.C + (1 | Classroom)
##    Data: MLM.Data
##
##      AIC      BIC   logLik deviance df.resid
##   2084.0   2099.9  -1038.0   2076.0      396
##
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -2.9293 -0.6462 -0.0109  0.6702  3.9940
##
## Random effects:
##  Groups    Name        Variance Std.Dev.
##  Classroom (Intercept) 27.255   5.221
##  Residual               9.933   3.152
## Number of obs: 400, groups:  Classroom, 4
##
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  28.5175     2.6151  10.905
## SelfAff.C     2.4902     0.3272   7.612
##
## Correlation of Fixed Effects:
##           (Intr)
## SelfAff.C 0.000

#### 2.3.2.1 What if we ignored the classroom random variable?

• Then we are doing a t-test/simple linear regression

Plot

Overall.Plot <-ggplot(data = MLM.Data, aes(x = as.factor(SelfAff.C), y=Math))+
geom_boxplot()+
geom_jitter(aes(colour = Classroom))+
xlab("Self-affirmations")+ylab("Math Score")+
theme(legend.position = "none")
Overall.Plot

Test Model

Model.LM<-lm(Math ~SelfAff.C, data=MLM.Data)
summary(Model.LM)
##
## Call:
## lm(formula = Math ~ SelfAff.C, data = MLM.Data)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -14.4019  -4.4070   0.6187   4.1174  13.6849
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) 28.51750    0.29911  95.340   <2e-16 ***
## SelfAff.C   -0.02765    0.59931  -0.046    0.963
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.982 on 398 degrees of freedom
## Multiple R-squared:  5.347e-06,  Adjusted R-squared:  -0.002507
## F-statistic: 0.002128 on 1 and 398 DF,  p-value: 0.9632
• The effect goes negative and near zero, as if self-affirmations do not help kids do better at math!
• This flip does not always occur (this is a special case); more often the effect can look much bigger than it really is
• Regardless, you always need to model the levels if you know them

# 3 MLM with Level 1 and Level 2 predictors

• Making predictions about students (Level 1) and how known differences between classrooms (Level 2) can affect or even interact with Level 1

## 3.1 Equations

Level 1 remains the same as above

Level 2 Intercept

$B_{0j} = \gamma_{01}W_j+\gamma_{00}+u_{0j}$

• Where $$W_j$$ = predictor at level 2 (classroom)
• Where $$\gamma_{01}$$ = new fixed effect of the level 2 predictor on the classroom intercept
• Where $$\gamma_{00}$$ = fixed population intercept
• Where $$u_{0j}$$ = random deviation of the classroom intercept from the fixed population intercept
• We assume classroom intercepts vary randomly around the population intercept

Level 2 Slope

$B_{1j} = \gamma_{11}W_j+\gamma_{10}+u_{1j}$

• Where $$W_j$$ = predictor at level 2 (classroom)
• Where $$\gamma_{11}$$ = new fixed effect of the level 2 predictor on the classroom slope
• Where $$\gamma_{10}$$ = fixed population slope
• Where $$u_{1j}$$ = random deviation of the classroom slope from the fixed population slope
• We assume classroom slopes vary randomly around the population slope
• Note: We would repeat this for each classroom we have (g, etc.)

### 3.1.1 Mixed Equation

• We put level 1 and level 2 together $y_{ij} = (\gamma_{11}W_j+\gamma_{10}+u_{1j})X_{ij} + (\gamma_{01}W_j+\gamma_{00}+u_{0j})+ r_{ij}$

## 3.2 Example

• Same study as before, but there is a level 2 variable we can examine
• Each kid reported how supported they felt in their classroom
• Let’s examine the new variable

ClassRoom.Support <-ggplot(data = MLM.Data, aes(x = Support, y=Math,group=Classroom))+
facet_grid(SelfAff ~ Classroom)+
geom_point(aes(colour = Classroom))+
geom_smooth(method = "lm", se = TRUE, aes(colour = Classroom))+
xlab("Support")+ylab("Math Score")+
theme(legend.position = "none")
ClassRoom.Support

• The problem seems to be that class support changes as a function of the classroom
• Basically it presents a potential problem: might the difference between classrooms be the product of how supported the students feel, or vice versa?
• So it is not simply a level 1 predictor; it might actually be a level 2 predictor!
• In other words, if we try to use it as is (a level 1 predictor), it is nested by classroom
• Let’s try to model it as simply level 1 and see what we get

Null Model (same as before)

Model.0<-lmer(Math ~ SelfAff.C+(1|Classroom),
data=MLM.Data, REML=FALSE)
summary(Model.0)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: Math ~ SelfAff.C + (1 | Classroom)
##    Data: MLM.Data
##
##      AIC      BIC   logLik deviance df.resid
##   2084.0   2099.9  -1038.0   2076.0      396
##
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -2.9293 -0.6462 -0.0109  0.6702  3.9940
##
## Random effects:
##  Groups    Name        Variance Std.Dev.
##  Classroom (Intercept) 27.255   5.221
##  Residual               9.933   3.152
## Number of obs: 400, groups:  Classroom, 4
##
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  28.5175     2.6151  10.905
## SelfAff.C     2.4902     0.3272   7.612
##
## Correlation of Fixed Effects:
##           (Intr)
## SelfAff.C 0.000

Test Model 1 Level 1 Variables

### Center variable
MLM.Data$Support.C <- scale(MLM.Data$Support, center=TRUE, scale=FALSE)
Model.1<-lmer(Math ~ SelfAff.C*Support.C+(1|Classroom),
data=MLM.Data, REML=FALSE)
summary(Model.1)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: Math ~ SelfAff.C * Support.C + (1 | Classroom)
##    Data: MLM.Data
##
##      AIC      BIC   logLik deviance df.resid
##   2061.5   2085.4  -1024.7   2049.5      394
##
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -2.7464 -0.6571  0.0162  0.6893  3.9025
##
## Random effects:
##  Groups    Name        Variance Std.Dev.
##  Classroom (Intercept) 32.617   5.711
##  Residual               9.274   3.045
## Number of obs: 400, groups:  Classroom, 4
##
## Fixed effects:
##                     Estimate Std. Error t value
## (Intercept)          28.3405     2.8599   9.910
## SelfAff.C             2.4220     0.3170   7.640
## Support.C             0.4919     0.1557   3.160
## SelfAff.C:Support.C   0.8946     0.2007   4.458
##
## Correlation of Fixed Effects:
##             (Intr) SlfA.C Sppr.C
## SelfAff.C    0.000
## Support.C   -0.001 -0.074
## SlfAf.C:S.C -0.014 -0.004  0.093
• But the problem is that we know Support changes as a function of the classroom. Could it be that the effect of support is not about differences in how individual kids feel supported, but that on average the kids in the different classrooms feel differently supported?
• So let’s control for the random slopes of Support at level 2: control for support as a function of classroom
• We can do this because each person’s slope is believed to measure the distribution around the population slope
• Basically, think of each person as a repeated measurement of each group

Test Model 2 Level 1 Variables and Level 2 Variable

Model.2<-lmer(Math ~ SelfAff.C*Support.C+(1+Support.C|Classroom),
data=MLM.Data, REML=FALSE)
summary(Model.2)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: Math ~ SelfAff.C * Support.C + (1 + Support.C | Classroom)
##    Data: MLM.Data
##
##      AIC      BIC   logLik deviance df.resid
##   2065.2   2097.1  -1024.6   2049.2      392
##
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -2.7603 -0.6692  0.0178  0.6928  3.9138
##
## Random effects:
##  Groups    Name        Variance  Std.Dev. Corr
##  Classroom (Intercept) 32.942043 5.73952
##            Support.C    0.007699 0.08775  1.00
##  Residual               9.266165 3.04404
## Number of obs: 400, groups:  Classroom, 4
##
## Fixed effects:
##                     Estimate Std. Error t value
## (Intercept)          28.4399     2.8741   9.895
## SelfAff.C             2.4015     0.3168   7.581
## Support.C             0.5028     0.1621   3.102
## SelfAff.C:Support.C   0.9226     0.2006   4.600
##
## Correlation of Fixed Effects:
##             (Intr) SlfA.C Sppr.C
## SelfAff.C    0.000
## Support.C    0.270 -0.072
## SlfAf.C:S.C -0.014 -0.003  0.094
• Wait, there is a problem with this: the correlation between the support slope and the classroom intercept is 1
• This means that the two variables are predicting the same thing
• Thus our level 2 predictor is confounded with the intercept
• We can fix this by “centering” the level 2 variable relative to the classroom
• In other words, how much deviation does each student have from the mean of the other students in their class?
• If the class-centered support variable predicts math score, we can be sure it is a unique predictor
• Now we ask: how much do they differ from their cohort?
• We will call this procedure Group Centering

library(plyr)
# Create a new variable in the data which is the mean of support per classroom (at each subject)
MLM.Data<-ddply(MLM.Data,.(Classroom), mutate,
ClassSupport = mean(Support))
# next we center each student relative to their class mean (how much do they differ from their cohort)
MLM.Data$Support.Class.Centered<-MLM.Data$Support-MLM.Data$ClassSupport

Test Model 3 Level 1 Variables and level 2 Variable (group centered)

Model.3<-lmer(Math ~ SelfAff.C*Support.Class.Centered+(1+Support.Class.Centered|Classroom),
data=MLM.Data, REML=FALSE)
summary(Model.3)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula:
## Math ~ SelfAff.C * Support.Class.Centered + (1 + Support.Class.Centered |
##     Classroom)
##    Data: MLM.Data
##
##      AIC      BIC   logLik deviance df.resid
##   2077.7   2109.7  -1030.9   2061.7      392
##
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -3.0079 -0.6497 -0.0203  0.6786  3.9855
##
## Random effects:
##  Groups    Name                   Variance Std.Dev. Corr
##  Classroom (Intercept)            26.7293  5.170
##            Support.Class.Centered  0.0169  0.130    0.24
##  Residual                          9.5701  3.094
## Number of obs: 400, groups:  Classroom, 4
##
## Fixed effects:
##                                  Estimate Std. Error t value
## (Intercept)                       28.4893     2.5897  11.001
## SelfAff.C                          2.4011     0.3224   7.448
## Support.Class.Centered             0.4654     0.1710   2.721
## SelfAff.C:Support.Class.Centered   0.8122     0.3197   2.541
##
## Correlation of Fixed Effects:
##             (Intr) SlfA.C Sp.C.C
## SelfAff.C    0.000
## Spprt.Cls.C  0.091 -0.069
## SlA.C:S.C.C -0.004 -0.016  0.045
• Now we see a smaller but still significant effect of support, and a significant interaction
• Let’s plot it up

Plot

library(effects)
Results.Model.3<-Effect(c("SelfAff.C","Support.Class.Centered"),Model.3,
xlevels=list(SelfAff.C=c(min(MLM.Data$SelfAff.C),max(MLM.Data$SelfAff.C)),
Support.Class.Centered=c(-sd(MLM.Data$Support.Class.Centered), sd(MLM.Data$Support.Class.Centered))))
Results.Model.3<-as.data.frame(Results.Model.3)
Results.Model.3$SelfAff.C<-factor(Results.Model.3$SelfAff.C,
levels=c(min(MLM.Data$SelfAff.C),max(MLM.Data$SelfAff.C)),
labels=c("No Self-A", "Yes Self-A"))
Final.Fixed.Plot <-ggplot(data = Results.Model.3, aes(x = Support.Class.Centered, y =fit, group=SelfAff.C))+
geom_line(aes(color=SelfAff.C), size=2)+
geom_ribbon(aes(ymin=fit-se, ymax=fit+se,fill=SelfAff.C),alpha=.2)+
xlab("Support")+
ylab("Math Score")+
scale_color_manual(values=c("blue", "red"))+
scale_fill_manual(values=c("blue", "red"))+
theme_bw()+
theme(text=element_text(face="bold", size=12),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.border = element_rect(fill = NA, colour = "NA"),
axis.line = element_line(size = 1, colour = "grey80"),
legend.title=element_blank(),
legend.position = "top")
Final.Fixed.Plot

### 3.2.1 Grand mean vs group centering

• When to center at the grand mean or the group
• This depends on your question and the data
• You may want to ask a grand mean question (how all people are impacted by the predictor regardless of group), but the data may not allow that question to be asked (if the predictor is confounded with group)
• The problem in this study arose because we had too few classrooms; however, even in large data sets this problem can remain
• Group centering asks how the within-group variation of the predictor predicts the outcome
• This is often a very interesting question and may not always differ in story from the grand mean version
• However, technically its meaning has changed! So you must think about what it means for your data to have been centered within group
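The two centering schemes can be contrasted in base R on a toy grouped vector (made-up numbers; `ave()` returns per-group means, so this mirrors the `plyr::ddply` approach used in the example):

```r
support <- c(1, 3, 5, 7)          # toy support scores
class   <- c("A", "A", "B", "B")  # toy classroom labels

# Grand mean centering: deviation from the mean of all students
grand.centered <- support - mean(support)        # overall mean = 4

# Group centering: deviation from each student's own class mean
group.centered <- support - ave(support, class)  # class means: A = 2, B = 6

grand.centered  # -3 -1  1  3
group.centered  # -1  1 -1  1
```

Note how the group-centered version removes all between-class differences: both classrooms end up with the same centered values, which is exactly why a group-centered predictor can no longer be confounded with the classroom intercepts.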