---
title: 'Generalized Linear Model'
output:
  html_document:
    code_download: yes
    fontsize: 8pt
    highlight: textmate
    number_sections: yes
    theme: flatly
    toc: yes
    toc_float:
      collapsed: no
---


```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(message = FALSE)
knitr::opts_chunk$set(fig.width=5)
knitr::opts_chunk$set(fig.height=3.75)
knitr::opts_chunk$set(fig.align='center') 
```


# Generalized Linear Model (GLM)
- **This is a whole area of regression and we could spend a full semester on this topic. Today's goal is a crash course on the basics of the most common type of GLM, logistic regression**
- So far you have been using a special case of the GLM, where we assume the underlying distribution is Gaussian
- Now we will expand out to examine the whole family
- When you run a GLM, you need to state the "family" and "link" function you will use

## Family and Link
- Linear regression assumes the DV has mean $\mu$ and SD $\sigma$ and that the possible range of responses is $(-\infty,\infty)$
- But a GLM allows us to examine categorical responses (2 or more categories), which cannot satisfy the same requirements as linear regression (you cannot have a $(-\infty,\infty)$ response range) 
- Some other distributions, like the binomial (2 responses) or Poisson (counts), can approximate the normal when transformed
- So we need to "link" the mean of the DV to the linear term of the model (think transform) to make it meet our regression requirements  
- Here are the most common families used in psych; a minimal sketch of the corresponding glm() calls follows the table

Family    | Variance  | Link
----------| -------- | -------
Gaussian  | Gaussian | identity
binomial  | binomial | logit/Probit
Poisson   | Poisson  | log
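
Below is a minimal sketch of how family and link are declared in R's `glm()`. The data here are toy values made up just for this illustration (not the study data used later); the Gaussian/identity case reproduces `lm()`.

```{r}
# Toy data made up for illustration only
set.seed(1)
toy <- data.frame(x = rnorm(50))
toy$y.norm  <- 2 * toy$x + rnorm(50)           # continuous DV
toy$y.bin   <- rbinom(50, 1, plogis(toy$x))    # 0/1 DV
toy$y.count <- rpois(50, exp(0.5 * toy$x))     # count DV

# Same glm() call each time; only the family and link change
coef(glm(y.norm  ~ x, data = toy, family = gaussian(link = "identity")))  # same fit as lm()
coef(glm(y.bin   ~ x, data = toy, family = binomial(link = "logit")))     # logistic regression
coef(glm(y.count ~ x, data = toy, family = poisson(link = "log")))        # Poisson regression
```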


# Linear regression on binomial DV
- Let's simulate what happens when we analyze a binomial DV as if it were a plain old regression
- My grandmother had a very hard time predicting gender in the 90s because clothing became big and baggy, men wore earrings, and men also started sporting long hair
- Let's simulate her prediction process: Gender (DV: 0 = male, 1 = female) predicted from the hair length (IV: -3 to 3) and bagginess of clothing (IV: -3 to 3) of the person she was looking at
- The goal is to predict gender (not the hair length of each gender, so t-tests are out; also, we can later add other predictors of gender)


```{r}
library(ggplot2)

set.seed(42)
n=30
x = runif(n,-3,3) #  Hair length centered [-3 short, 3 = long]
j = runif(n,-3,3) #  Baggy clothing centered [-3 not baggy, 3 = really baggy]
z =  .8*x - .2*j         
pr = 1/(1+exp(-z))  # pass through an inv-logit function
y = rbinom(n,1,pr)  # response variable
 
# now put the simulated variables in a data frame to feed to lm (and later glm)
LogisticStudy1= data.frame(Gender=y,HairLength=x, baggy=j)

ggplot(LogisticStudy1, aes(x=HairLength, y=Gender)) + geom_point() + 
  stat_smooth(method="lm", formula=y~x, se=FALSE)+
  theme_classic()
ggplot(LogisticStudy1, aes(x=baggy, y=Gender)) + geom_point() + 
  stat_smooth(method="lm", formula=y~x, se=FALSE)+
  theme_classic()

```


```{r, echo=TRUE, warning=FALSE,message=FALSE,results='asis'}
library(stargazer)
LM.1<-lm(Gender~HairLength+baggy,data=LogisticStudy1)
stargazer(LM.1,type="html",
          column.labels = c("LM"),
          intercept.bottom = FALSE,
          single.row=FALSE, 
          notes.append = FALSE,
          header=FALSE)

```

- The intercept reflects the predicted gender when hair length and bagginess are at their centers (0)
- The slope on hair says that as hair gets longer, the person is more likely to be female
- The slope on baggy says that as clothes get baggier, the person is more likely to be male (but it is not significant)
- How do the residuals look?

```{r}
plot(LM.1, which =1)
```

- That looks really odd because the fitted (predicted) values are continuous but the response is binary
- Let's see how well we predicted the result for each individual:


```{r}
LogisticStudy1$Predicted.Value<-predict(LM.1)
summary(LogisticStudy1$Predicted.Value)
```

- Yikes: the model predicted values outside the range of possible values
- Let's say any prediction > .5 is Female and anything below that is Male
- Then we will examine a contingency table (predicted results vs. true results)


```{r}
LogisticStudy1$Predicted.Gender<-ifelse(LogisticStudy1$Predicted.Value > .5,1,0)
C.Table<-with(LogisticStudy1,
   table(Predicted.Gender, Gender))
C.Table
```

- Correct predictions are the 0-0 and 1-1 cells; bad predictions are the mismatches

```{r}
PercentPredicted<-(C.Table[1,1]+C.Table[2,2])/sum(C.Table)*100
PercentPredicted
```

- So the model did a good job making the prediction in this simple case, but the $R^2$ has no meaning (what variability of gender is it explaining?)

### Summary
- Using linear regression on these data produced odd predictions outside of the bounded range, a violation of homoscedasticity, and an $R^2$ which makes no sense   
- Instead let's try a GLM, but first let's understand the binomial distribution and the logit link function

# Logistic Regression
- Logistic regression is what we call the regression where we analyze a binomial DV

## Binomial Distribution 
- The binomial can be expressed as follows: 
$$p(n|N) = \binom{N}{n}p^n(1-p)^{N-n}$$
- where $n$ = the number of successes in $N$ trials, at a specific probability $p$
- These change as a function of the underlying probability of getting a 0 or 1
- The plot below has N = 10 people making 1 response each, with the probability changing from 0 to 1 by .25 (a quick numerical check of the formula follows the plot)

```{r, fig.width=7.5, fig.height=3.0}

par(mfrow=c(1, 5))
for(p in seq(0, 1, len=5))
{
    x <- dbinom(0:10, size=10, p=p)
    barplot(x, names.arg=0:10, space=0)
}

```
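
As a quick sanity check of the formula above, here is n = 3 successes in N = 10 trials with p = .5 (arbitrary values), computed by hand and with `dbinom()`:

```{r}
# Hand calculation of the binomial probability vs. R's dbinom()
choose(10, 3) * .5^3 * (1 - .5)^(10 - 3)
dbinom(3, size = 10, prob = .5)
```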

- As the number of people increases you will notice it starts to look normal; here are 100 responses

```{r, fig.width=7.5, fig.height=3.0}
par(mfrow=c(1, 5))
for(p in seq(0, 1, len=5))
{
    x <- dbinom(0:100, size=100, p=p)
    barplot(x, names.arg=0:100, space=0)
}
```

## Logit Linking Function
- We can bound our predictions by making our best-fit line asymptotic to the boundary conditions
- To make this work we need to switch from straight lines to a sigmoid 
- Also, regression wants the DV to range over $(-\infty,\infty)$, so we will have to transform
- $$logit(p) = \log\frac{p}{1-p}$$ 
- Note: I will use log for the natural log to be consistent with R (in most texts you will see the natural log written as LN); R's built-in version of this transform is shown after the plot

```{r, fig.width=3.5, fig.height=3.0}
logit.Transform<-function(p) {log(p/(1-p)) }

plot(logit.Transform(seq(0,1,.0001)),seq(0,1,.0001),
     main="Logit Transform",ylim = c(0,1),
  xlab="Logit",ylab="Probability")

```
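
Side note: you do not have to hand-roll this transform; base R's `qlogis()` is the logit and `plogis()` is its inverse.

```{r}
# Base R equivalents of the transform above
qlogis(c(.1, .5, .9))           # logit, same as logit.Transform(c(.1, .5, .9))
plogis(qlogis(c(.1, .5, .9)))   # inverse logit takes us back to the probabilities
```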


## Fit the logistic regression 
- $logit(Gender) = B_1(HairLength) +B_2(baggy) + B_0$
- This can be accomplished by changing the *function* in R to **glm** from **lm** and specifying the *family* as **binomial(link = "logit")**
- First let's plot and then make sense of the parameters afterwards

```{r, echo=TRUE, warning=FALSE,message=FALSE,results='asis'}
LR.1<-glm(Gender~HairLength+baggy,data=LogisticStudy1, family=binomial(link = "logit"))

stargazer(LR.1,type="html",
          column.labels = c("Logistic"),
          intercept.bottom = FALSE,
          single.row=FALSE, 
          notes.append = FALSE,
          header=FALSE)

```


```{r, fig.width=3.5, fig.height=3.0}
library(ggplot2)
ggplot(LogisticStudy1, aes(x=HairLength, y=Gender)) + geom_point() + 
  stat_smooth(method="glm", method.args=list(family="binomial"), se=FALSE)+
  theme_classic()

ggplot(LogisticStudy1, aes(x=baggy, y=Gender)) + geom_point() + 
  stat_smooth(method="glm", method.args=list(family="binomial"), se=FALSE)+
  theme_classic()

```

## Plot in probabilities 
- Y-axis = predicted probability of gender (being female), shown as a function of each predictor

```{r, fig.width=5.5, fig.height=3.0}
library(effects)
PredictedData<-allEffects(LR.1)
plot(PredictedData)
```

- How did we get the probability from logit?

## Interpret coefficients
- Raw coefficients are on the transformed (logit) scale and are hard to make sense of
- So we can transform them back to something meaningful: odds or probabilities
- The odds of success are defined as the ratio of the probability of success to the probability of failure
- 50% chance = odds of 1 to 1
 $$Odds = e^{logit(p)} = \frac{p}{1-p}$$ 

```{r, fig.width=3.5, fig.height=3.0}
L.to.O.Transform<-function(p) {exp(logit.Transform(p))}

plot((logit.Transform(seq(0,.99,.01))),L.to.O.Transform(seq(0,.99,.01)),
     main="Logit to Odds Transform",
  xlab="Logit",ylab="Odds")

```
- Here are our regression estimates exponentiated into odds (the slopes become odds ratios); a sketch with confidence intervals follows the table

```{r, fig.width=3.5, fig.height=3.0}
LR.1.Trans <- coef(summary(LR.1))
LR.1.Trans[, "Estimate"] <- exp(coef(LR.1))
LR.1.Trans
```
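
One common way to report these is with interval estimates on the odds scale; a quick sketch using Wald-type intervals from `confint.default()`:

```{r}
# Odds ratios with Wald-type 95% confidence intervals
exp(cbind(OR = coef(LR.1), confint.default(LR.1)))
```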


- MDs talk in odds, but psychologists prefer probabilities
$$P = \frac{Odds} {1 + Odds}$$

```{r, fig.width=3.5, fig.height=3.0}
O.to.P.Transform<-function(p) {L.to.O.Transform(p)/(1+L.to.O.Transform(p))}

plot(L.to.O.Transform(seq(0,.94,.01)),O.to.P.Transform(seq(0,.94,.01)),
     main="Odds to Probability Transform",
  xlab="Odds",ylab="Probability")

```

- So we can convert our regression equation to give our probabilities directly

$$p_{(Gender)} =\frac{e^{(B_1(HairLength)+B_2(baggy) + B_0)}}{1+e^{(B_1(HairLength)+B_2(baggy) + B_0)}}$$

- or more simply!

$$p_{(Gender)} =\frac{1}{1+e^{-(B_1(HairLength)+B_2(baggy) + B_0)}}$$



```{r, fig.width=3.5, fig.height=3.0}
LR.1.TransP <- coef(summary(LR.1))
LR.1.TransP[, "Estimate"] <- exp(coef(LR.1))/(1+exp(coef(LR.1)))
LR.1.TransP
```

- However, you have to report the raw (logit-scale) values that come from the logistic regression
- This is because your IVs are measured in raw units; if you reported this transformed table, you would also have to figure out how to convert all of your IV units
- Thus, you simply use the equations above after you add up your predictors
- So, just like in linear regression, you add up your predictor estimates (say you had nominal variables and interactions) and then convert them as above (see the sketch below)
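
For example, here is the predicted probability for a hypothetical person with HairLength = 2 and baggy = -1 (made-up values), first by hand with the equation above and then with `predict()`:

```{r}
# Add up the predictors on the logit scale, then convert to a probability
b <- coef(LR.1)
logit.hat <- b["(Intercept)"] + b["HairLength"] * 2 + b["baggy"] * (-1)
plogis(logit.hat)  # same as 1 / (1 + exp(-logit.hat))
predict(LR.1, newdata = data.frame(HairLength = 2, baggy = -1), type = "response")
```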

## Wait - What about my R-squared?
- $R^2$ has no meaning in these models
- They do not measure the amount of **variance accounted for**
- Binomial data cannot be homoscedastic: there is no spread of data around the line in the usual sense, and the variance at each value need not be the same 
- Over the years people have created different types of pseudo-$R^2$ to try to recover a linear-regression style of understanding 
- Common favorites are Cox & Snell (aka maximum likelihood), Nagelkerke (aka Cragg and Uhler's), McFadden, Efron's, etc.
- You can see an easy description of each type here: http://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/
- Each of them tries to capture something like our original $R^2$, such as (A) explained variability, (B) model improvement, like a $\Delta R^2$ between models, and (C) a measure of multiple correlation
- Many of these test between a restricted (null) model and your model, so first we need to examine model fitting for GLMs

### Deviance Testing 
- In linear regression, OLS was the fitting method; it worked because OLS can find the mean and variance of the normal distribution, but now we are not working with normal distributions, so we need a new fitting procedure
- In our OLS regression we had this concept: $$SS_{Resid} = SS_Y - SS_{Regression}$$
- In other words, residual = actual - fitted values 
- In OLS we can get the best fit *analytically* (by solving an equation), but with a GLM you cannot do that
- Instead we calculate the *deviance* of scores, which is built on the idea of maximum likelihood
- So we *iterate* to a solution (since we cannot solve it without trial and error)
- We need to find the likelihood, which is "a hypothetical probability that an event that has **already** occurred would yield a specific outcome" (http://mathworld.wolfram.com/Likelihood.html)
- We iterate through parameter values until we maximize the likelihood (this is maximum likelihood estimation) 
- There are different ML (or just L) functions that can be used, and they apply to most distributions
- When the fit is perfect, $$L_{perfect} = 1$$
- The null case (starting model) has ONLY an intercept; it will probably yield the lowest likelihood
- Our test model will be intercept + predictors ($k$ parameters)
$$Likelihood\ ratio =\frac{L_{Simple}}{L_{Complex}}$$
- Deviance is -2 times the natural log of the likelihood ratio: $$D = -2\log(Likelihood\ ratio)$$ AKA $$D = -2LL$$
First I will show you the pseudo-$R^2$ and then we will examine how to test between model fits

- Null Deviance 
$$D_{Null} = -2[\log(L_{Null}) - \log(L_{Perfect})]$$

- Model Deviance
$$D_{K} = -2[\log(L_{K}) - \log(L_{Perfect})]$$
- This is like our SS residual from OLS (a quick check in R follows)
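
A quick check of these quantities in R: with ungrouped 0/1 data like ours, the perfect (saturated) model has log-likelihood 0, so the model deviance is just -2 times the model's log-likelihood.

```{r}
logLik(LR.1)                    # log-likelihood of the fitted model
-2 * as.numeric(logLik(LR.1))   # matches the residual deviance
deviance(LR.1)                  # model deviance (D_K)
LR.1$null.deviance              # deviance of the intercept-only model (D_Null)
```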

### Pseudo-R-squared
- We need those crazy deviance scores for some of our pseudo-$R^2$ measurements
- For example, 
$$R_L^2 = \frac{D_{Null} - D_{k}}{D_{Null}}$$
- Cox and Snell is what SPSS gives, and people report it often
- Nagelkerke is an improvement on C&S, as it corrects some of its problems
- But people like McFadden's because it is easy to understand
$$R_{McFadden}^2 = 1 - \frac{LL_{k}}{LL_{Null}}$$
 
- We can just get them from the pscl package (and check McFadden by hand after the output)
- llh = the log-likelihood from the fitted model (the $LL_{k}$ above)
- llhNull = the log-likelihood from the intercept-only restricted model
- $G^2 = -2(LL_{K} - LL_{NULL})$ is one of the proposed goodness-of-fit measures (we will come back to this later)
- McFadden = McFadden pseudo-$R^2$
- r2ML = Cox & Snell pseudo-$R^2$
- r2CU = Nagelkerke pseudo-$R^2$
```{r, echo=TRUE, warning=FALSE,message=FALSE}
library(pscl)
pR2(LR.1)

```
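
As a check, McFadden's pseudo-$R^2$ can be reproduced by hand from the fitted and intercept-only log-likelihoods:

```{r}
# McFadden by hand: 1 - LL_k / LL_Null (should match the pR2() output above)
LR.Null <- glm(Gender ~ 1, data = LogisticStudy1, family = binomial(link = "logit"))
1 - as.numeric(logLik(LR.1)) / as.numeric(logLik(LR.Null))
```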
 
### Wait, why do I see Z and not t-values?
- When testing individual predictors, you are not seeing t-tests (one-sample t-tests against 0); you are looking at Wald Z-scores
- Some argue that individual predictors should be tested against a model that does not have that term (like a stepwise regression), but our programs will calculate a test statistic for each predictor in the model (many people ignore these and just look at the change in overall model fit) 
 
 $$Wald = \frac{B_j^2}{SE_{B_j}^2}$$
 
- The Wald statistic follows a chi-square distribution with 1 df; the z value R reports is its square root, $B_j/SE_{B_j}$ (a quick check follows)
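
A small check: squaring the reported z values gives the Wald chi-square statistics, and their chi-square (df = 1) p-values match the two-tailed z p-values in the model summary.

```{r}
# Wald chi-square statistics from the reported z values
z <- coef(summary(LR.1))[, "z value"]
cbind(Wald = z^2, p = pchisq(z^2, df = 1, lower.tail = FALSE))
```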

- Or you can test the predictors sequentially, each term against the model containing the terms entered before it, like this (so the order of predictors matters)

```{r, echo=TRUE, warning=FALSE,message=FALSE}
anova(LR.1, test="Chisq")
```
 
## Hierarchical testing 
- Going stepwise can be difficult if you have lots of predictors
- Since we cannot test the change in $R^2$, we instead test whether the deviance is significantly reduced relative to the model without the predictor (just like above)
- So we run a likelihood-ratio test between the models, which is tested against the chi-square distribution

```{r, echo=TRUE, warning=FALSE,message=FALSE}
LR.Model.1<-glm(Gender~baggy,data=LogisticStudy1, family=binomial(link = "logit"))
LR.Model.2<-glm(Gender~baggy+HairLength,data=LogisticStudy1, family=binomial(link = "logit"))
anova(LR.Model.1,LR.Model.2,test = "Chisq")
```

- Here we see model 2 shows an improvement in deviance (it fits better), which means hair length did help the prediction; a quick by-hand check of that chi-square follows
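
A minimal sketch of where that chi-square comes from: the drop in deviance between the two models, tested with 1 df.

```{r}
D.change <- deviance(LR.Model.1) - deviance(LR.Model.2)  # change in deviance
D.change
pchisq(D.change, df = 1, lower.tail = FALSE)             # same p-value as anova() above
```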

## How well am I predicting?
- Model 1
```{r, echo=TRUE, warning=FALSE,message=FALSE}
fitted.results <- predict(LR.Model.1,newdata=LogisticStudy1,type='response')
fitted.results <- ifelse(fitted.results > 0.5,1,0)
misClasificError <- mean(fitted.results != LogisticStudy1$Gender)
print(paste('Accuracy = ',round(1-misClasificError,3)))
```
- Model 2
```{r, echo=TRUE, warning=FALSE,message=FALSE}
fitted.results <- predict(LR.Model.2,newdata=LogisticStudy1,type='response')
fitted.results <- ifelse(fitted.results > 0.5,1,0)
misClasificError <- mean(fitted.results != LogisticStudy1$Gender)
print(paste('Accuracy = ',round(1-misClasificError,3)))
```
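
We can also break Model 2's accuracy down into a contingency table, mirroring what we did for the linear model (a minimal sketch):

```{r}
# Predicted vs. actual gender for Model 2
Predicted.Gender.2 <- ifelse(predict(LR.Model.2, type = "response") > .5, 1, 0)
table(Predicted.Gender.2, Gender = LogisticStudy1$Gender)
```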

- This is not the only way to examine accuracy
- We have correct responses, misses, and false alarms (remember our Type I and II error boxes)
- To visualize this we will examine the receiver operating characteristic (ROC) curve
- It shows the relationship between correct detections (true positives) and false alarms (false positives)
- We will calculate the area under the curve (AUC) to get a good measure of accuracy (the closer to 1, the better)
- Also, the curve should follow the shape you see below (if it bows the opposite way, you have a problem)

```{r, echo=TRUE, warning=FALSE,message=FALSE}
library(ROCR)
fitted.results <- predict(LR.Model.2,newdata=LogisticStudy1,type='response')
pr <- prediction(fitted.results, LogisticStudy1$Gender)

prf <- performance(pr, measure = "tpr", x.measure = "fpr")
plot(prf)

auc <- performance(pr, measure = "auc")
auc <- auc@y.values[[1]]
print(paste('Area under the Curve = ',round(auc,3)))

```



## Some new things to keep in mind/check in your model

- Should I be fitting a logistic model at all: is a sigmoid the right shape?
- Hosmer-Lemeshow test: are the observed proportions of events similar to the predicted probabilities of occurrence in subgroups of the data set? This is evaluated with a chi-square test. 
- Like a test of HOV, you do not want this to be significant 
```{r, echo=TRUE, warning=FALSE,message=FALSE}
library(MKmisc)
HLgof.test(fit = fitted(LR.Model.2), obs = LogisticStudy1$Gender)
```


