[R] parfm unable to fit models when hazard rate is small
Bert Gunter
bgunter.4567 at gmail.com
Wed Apr 4 18:58:06 CEST 2018
Possible hint:
1. Look at the error message.
2.
> 1/0
[1] Inf
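To make that concrete (an illustration, not part of the original hint;
the step size here is invented): from a rate as small as 0.001, a
finite-difference step can land below zero, and log() of the result is
exactly the NaN in the warning:

> log(0.001 - 0.01)
[1] NaN
Warning message:
In log(0.001 - 0.01) : NaNs produced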
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip)
On Wed, Apr 4, 2018 at 6:37 AM, Alexander Pate
<alexander.pate at manchester.ac.uk> wrote:
> Hello, I would like to use the parfm package (https://cran.r-project.org/web/packages/parfm/parfm.pdf) in my work. This package fits parametric frailty models to survival data. To make sure I was using it properly, I started by running some small simulations: I generate survival data (without any random effects) and then analyse the data with parfm, using an exponential baseline hazard. When the baseline hazard rate drops to 0.001, I get the following error when trying to fit the model:
>
> Error in optimHess(par = ESTIMATE, fn = Mloglikelihood, obs = obsdata, :
> non-finite finite-difference value [1]
> In addition: Warning message:
> In log(pars) : NaNs produced
>
> Has anybody else come across this issue, or can anyone suggest why parfm struggles with such low event rates? Or could someone please run my code and see whether they get the same error? Full reproducible code is given below.
>
> Many thanks for any help,
> Alex
>
> CODE:
>
> library(survival)   # provides Surv()
> library(parfm)      # provides parfm()
>
> ### Create function to generate survival data
> simulWeib <- function(N, lambda, rho, beta1, beta2, beta3, beta4, rateC, sigma)
> {
>   # beta2, beta3, and beta4 are accepted but unused here
>
>   # covariate --> N Bernoulli(0.5) trials
>   x1 <- sample(x = c(0, 1), size = N, replace = TRUE, prob = c(0.5, 0.5))
>
>   # random effect: one draw per subject from the same normal distribution
>   # (sigma = 0 switches the random effect off)
>   rand.effect <- rnorm(N, 0, sigma)
>
>   # Weibull latent event times via inverse-transform sampling;
>   # note the round(), which discretises the times
>   v <- runif(n = N)
>   Tlat <- round((-log(v) / (lambda * exp(x1 * beta1 + rand.effect)))^(1 / rho))
>   multiplier <- exp(x1 * beta1 + rand.effect)
>   haz <- lambda * exp(x1 * beta1 + rand.effect)
>
>   # censoring times
>   C <- rexp(n = N, rate = rateC)
>
>   # follow-up times and event indicators
>   time <- pmin(Tlat, C)
>   status <- as.numeric(Tlat <= C)
>
>   # data set
>   data.frame(id = 1:N,
>              id10 = ceiling(1:N / 10),
>              time = time,
>              status = status,
>              x1 = as.factor(x1),
>              re = rand.effect,
>              multiplier = multiplier,
>              haz = haz)
> }
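>
> ## A quick sanity check of the generator (an illustrative addition,
> ## not in the original post): with rho = 1, sigma = 0, and beta1 = 0,
> ## every latent time is Exp(lambda), so 1/mean(time) should sit near
> ## lambda and nearly every observation should be an event.
> chk <- simulWeib(N = 10000, lambda = 0.1, rho = 1, beta1 = 0, beta2 = 0,
>                  beta3 = 0, beta4 = 0, rateC = 0.0000000001, sigma = 0)
> mean(chk$status)   # close to 1: censoring is effectively switched off
> 1 / mean(chk$time) # roughly 0.1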
>
> ### The reason it doesn't work is that the event rate gets so small!!
> set.seed(101)
>
> # Note that although the data are generated from a Weibull model, rho = 1 reduces it to an exponential hazard with rate = lambda
> # Also note that sigma = 0, so the random effect is absent
>
> ## Create data and fit the model, lambda = 0.1
> data0 <- simulWeib(10000, lambda = 0.1, rho = 1, rateC = 0.0000000001,
>                    beta1 = 0.25, beta2 = 0, beta3 = 0, beta4 = 0, sigma = 0)
> fit.cox0 <- parfm(Surv(time, status) ~ x1, data = data0, dist = "exponential")
> fit.cox0
>
> ## Create data and fit the model, lambda = 0.01
> data0 <- simulWeib(10000, lambda = 0.01, rho = 1, rateC = 0.0000000001,
>                    beta1 = 0.25, beta2 = 0, beta3 = 0, beta4 = 0, sigma = 0)
> fit.cox0 <- parfm(Surv(time, status) ~ x1, data = data0, dist = "exponential")
> fit.cox0
>
> ## Create data and fit the model, lambda = 0.001
> data0 <- simulWeib(10000, lambda = 0.001, rho = 1, rateC = 0.0000000001,
>                    beta1 = 0.25, beta2 = 0, beta3 = 0, beta4 = 0, sigma = 0)
> fit.cox0 <- parfm(Surv(time, status) ~ x1, data = data0, dist = "exponential")
> fit.cox0
>
> ## Create data and fit the model, lambda = 0.0001
> data0 <- simulWeib(10000, lambda = 0.0001, rho = 1, rateC = 0.0000000001,
>                    beta1 = 0.25, beta2 = 0, beta3 = 0, beta4 = 0, sigma = 0)
> fit.cox0 <- parfm(Surv(time, status) ~ x1, data = data0, dist = "exponential")
> fit.cox0
>
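If the failure is purely numerical, a minimal sketch of one workaround
(illustrative, not from the thread; the constant 1000 and the names scl,
time.s, and fit.s are invented) is to rescale time so that the fitted
exponential rate is of order 1, then convert the estimate back:

scl <- 1000
data0$time.s <- data0$time / scl
fit.s <- parfm(Surv(time.s, status) ~ x1, data = data0, dist = "exponential")
fit.s   # rate on the original time scale = fitted rate / scl

On the rescaled axis the rate sits well away from the lambda = 0
boundary, which is what log(pars) and the finite-difference Hessian
were tripping over.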