[R] LMER
Douglas Bates
bates at stat.wisc.edu
Fri Feb 15 21:14:59 CET 2008
On Fri, Feb 15, 2008 at 8:59 AM, Daniel Malter <daniel at umd.edu> wrote:
> Thanks for your replies. My real problem is that, for my real data, I get
> basically the same results from r2 and r3 (so to speak), but the coefficient
> estimates and significance levels for r1 are very different from those of r2
> and r3. I therefore do not know which of the results to trust, if any.
For these data the development version (0.999375-4) of the lme4
package converges pretty rapidly to an estimate of zero for the
variance component.
> dat <- data.frame(Y = c(0,1,1,1,1,0,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1),
+ X = c(1,2,3,4,3,1,0,0,2,3,3,2,4,3,2,1,1,3,4,2,3),
+ Subject = gl(7, 1, len = 21, labels = letters[1:7]))
> dat
   Y X Subject
1  0 1       a
2  1 2       b
3  1 3       c
4  1 4       d
5  1 3       e
6  0 1       f
7  0 0       g
8  0 0       a
9  0 2       b
10 0 3       c
11 1 3       d
12 1 2       e
13 1 4       f
14 1 3       g
15 0 2       a
16 0 1       b
17 0 1       c
18 1 3       d
19 1 4       e
20 1 2       f
21 1 3       g
> print(r1 <- lmer(Y~X+(1|Subject), dat, binomial, verbose = 1))
0: 14.071151: 0.942809 -4.97872 2.43040
1: 14.006211: 0.861564 -4.96215 2.51055
2: 13.936051: 0.779991 -5.00923 2.44399
3: 13.723943: 0.226602 -5.11306 2.46057
4: 13.699745: 0.0880156 -5.07147 2.47386
5: 13.695821: 0.00000 -4.98859 2.42895
6: 13.695469: 0.00000 -4.98540 2.43314
7: 13.695462: 0.00000 -4.98163 2.43166
8: 13.695460: 0.00000 -4.97873 2.43040
9: 13.695460: 0.00000 -4.97872 2.43040
10: 13.695460: 0.00000 -4.97872 2.43040
Generalized linear mixed model fit by the Laplace approximation
Formula: Y ~ X + (1 | Subject)
   Data: dat
   AIC   BIC logLik deviance
 19.70 22.83 -6.848    13.70
Random effects:
 Groups  Name        Variance Std.Dev.
 Subject (Intercept) 0        0
Number of obs: 21, groups: Subject, 7

Fixed effects:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   -4.979      2.315  -2.150   0.0315 *
X              2.430      1.026   2.368   0.0179 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Correlation of Fixed Effects:
  (Intr)
X -0.955
It is not terribly surprising if you look at a plot of the data.
There are only 3 binary responses per subject, which is not much
information from which to estimate a between-subject variance.
I'm not sure what PQL would give for these data, but with the variance
component estimated as zero the Laplace approximation simply reverts to
a generalized linear model without any random effects.
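A quick way to check that claim: with the Subject variance estimated as
zero, an ordinary logistic regression fit with glm() should give
essentially the same fixed-effects estimates as the lmer() fit above.
A minimal sketch, reusing the dat data frame constructed above:

## Plain logistic regression without random effects; the coefficients
## should match the fixed effects printed above (about -4.98 and 2.43)
## because the estimated Subject variance is zero.
fm0 <- glm(Y ~ X, family = binomial, data = dat)
summary(fm0)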
>
> The session info follows:
>
> R version 2.6.0 (2007-10-03)
> i386-pc-mingw32
>
> locale:
> LC_COLLATE=English_United States.1252;LC_CTYPE=English_United
> States.1252;LC_MONETARY=English_United
> States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252
>
> attached base packages:
> [1] stats graphics grDevices utils datasets methods base
>
> other attached packages:
> [1] nlme_3.1-86 mgcv_1.3-29 lme4_0.99875-9 Matrix_0.999375-3
> lattice_0.16-5
>
> loaded via a namespace (and not attached):
> [1] grid_2.6.0
>
> Cheers,
> Daniel
>
>
> -------------------------
> cuncta stricte discussurus
> -------------------------
>
> -----Original Message-----
> From: dmbates at gmail.com [mailto:dmbates at gmail.com] On Behalf Of Douglas
> Bates
> Sent: Friday, February 15, 2008 7:29 AM
> To: Daniel Malter
> Cc: r-help at stat.math.ethz.ch
> Subject: Re: [R] LMER
>
>
>
> Could you send us the output of sessionInfo() please so we can see which
> version of the lme4 package you are using? In recent versions, especially
> the development version available as
>
> install.packages("lme4", repos = "http://r-forge.r-project.org")
>
> the PQL algorithm is no longer used. The Laplace approximation is now the
> default. The adaptive Gauss-Hermite quadrature (AGQ) approximation may be
> offered in the future.
>
> If the documentation indicates that PQL is the default, then that is a
> documentation error. With the current implementation, which directly
> optimizes the Laplace approximation to the log-likelihood of the model,
> there is no purpose in offering PQL.
>
> On Thu, Feb 14, 2008 at 6:50 PM, Daniel Malter <daniel at umd.edu> wrote:
> > Hi,
> >
> > I run the following models:
> >
> > 1a. lmer(Y~X+(1|Subject),family=binomial(link="logit")) and 1b.
> > lmer(Y~X+(1|Subject),family=binomial(link="logit"),method="PQL")
> >
> > Why does 1b produce results different from 1a? I am asking because the
> > help states that "PQL" is the default for GLMMs
> >
> > and
> >
> > 2. gamm(Y~X,family=binomial(link="logit"),random=list(Subject=~1))
> >
> > The interesting thing about the example below is that gamm is also
> > supposed to fit by "PQL". Interestingly, however, the gamm fit yields
> > roughly the coefficient estimates of 1b but the significance values of
> > 1a. Any insight would be greatly appreciated.
> >
> >
> > library(lme4)
> > library(mgcv)
> >
> > Y=c(0,1,1,1,1,0,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1)
> > X=c(1,2,3,4,3,1,0,0,2,3,3,2,4,3,2,1,1,3,4,2,3)
> > Subject=as.factor(c(1,2,3,4,5,6,7,1,2,3,4,5,6,7,1,2,3,4,5,6,7))
> > cbind(Y,X,Subject)
> >
> > r1=lmer(Y~X+(1|Subject),family=binomial(link="logit"))
> > summary(r1)
> >
> > r2=lmer(Y~X+(1|Subject),family=binomial(link="logit"),method="PQL")
> > summary(r2)
> >
> > r3=gamm(Y~X,family=binomial(link="logit"),random=list(Subject=~1))
> > summary(r3$gam)
> >
> >
> >
> > -------------------------
> > cuncta stricte discussurus
> >
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>