[R] question regarding kpss tests from urca, uroot and tseries packages

markleeds at verizon.net
Thu Jan 10 22:05:34 CET 2008


>From: eugen pircalabelu <eugen_pircalabelu at yahoo.com>
>Date: 2008/01/10 Thu PM 02:48:32 CST
>To: R-help <r-help at stat.math.ethz.ch>
>Subject: [R] question regarding kpss tests from urca, uroot and tseries packages

Schwert has an algorithm for deciding on the
number of lags to use in an ADF test, but I
can't say whether you can apply the same
algorithm to KPSS. Google for Schwert's
algorithm and maybe you'll find a useful reference.
This hypothesis-testing-in-time-series stuff
is art rather than science,
so there may be an algorithm out there, but
there is no hard and fast rule
that's going to be correct all the time.
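For what it's worth, the rule of thumb usually attributed to Schwert (1989) caps the lag at floor(12 * (T/100)^(1/4)) for a sample of size T (there's also a version with 4 in place of 12). A quick sketch in R; schwert_lag is just a name I made up, so check the reference before relying on it:

```r
# Schwert's (1989) rule of thumb for the maximum ADF lag:
#   l_max = floor(12 * (T / 100)^(1/4)), where T is the sample size
# (schwert_lag is a hypothetical helper, not part of any package)
schwert_lag <- function(T) {
  floor(12 * (T / 100)^(1/4))
}
schwert_lag(45)  # gives 9 for the 45 observations in the question below
```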

Just to give you some short background
in terms of the ADF test:

Generally speaking, to decide whether there is
a unit root using ADF, the residuals of the test
regression need to be serially uncorrelated under
the null hypothesis. So when you choose a lag,
you are basically deciding how many lags of x to
use in the null model so that there is no serial
correlation in the residuals.

If you put too few lags in, you end up with
correlated residuals and the test is wrong. If
you put too many in, you lose power (the ability
to reject the null), and that's not good either.

I'm not sure whether all of the above applies
to KPSS, because I forget the details of
how it's done, but it's probably related.
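As an illustrative sketch (this is not from the original post): in urca, ur.df() can pick the lag itself by information criterion, and you can then check the residuals of the test regression for leftover autocorrelation. I'm assuming here that the @res slot of the returned object holds those residuals, going from my memory of the urca docs; the random-walk x is made-up data:

```r
library(urca)

set.seed(1)
x <- cumsum(rnorm(100))  # made-up random walk, purely for illustration

# start from a generous maximum lag and let ur.df() choose by AIC
# (selectlags can also be "BIC" or "Fixed")
adf <- ur.df(x, type = "drift", lags = 8, selectlags = "AIC")
summary(adf)

# residuals of the fitted test regression; a small p-value here
# suggests the chosen lag still leaves serial correlation behind
Box.test(adf@res, lag = 10, type = "Ljung-Box")
```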

To get some background in time series, you
should read Hamilton if you're brave, or Enders
if you just want a quick (pretty non-mathematical)
treatment with great intuition. Good luck. I
replied privately, but others may say something
more useful, so check the list replies as well;
plus, I didn't really answer your question.

                            mark
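One more thing that may explain the conflicting results you saw: if I remember the docs right, ur.kpss() and kpss.test() just use different default truncation lags for the Bartlett window ("short" is roughly floor(4*(n/100)^(1/4)), "long" roughly floor(12*(n/100)^(1/4))), so it's worth running both settings side by side. A sketch using the series from your question; check the package docs, since I'm going from memory on the defaults:

```r
library(urca)
library(tseries)

# the series from the question quoted below
x <- c(253, 252, 275, 275, 272, 254, 272, 252, 249, 300, 244,
       258, 255, 285, 301, 278, 279, 304, 275, 276, 313, 292, 302,
       322, 281, 298, 305, 295, 286, 327, 286, 270, 289, 293, 287,
       267, 267, 288, 304, 273, 264, 254, 263, 265, 278)
x <- ts(x, frequency = 12)

# ur.kpss with the "short" and "long" Bartlett-window truncation lags
summary(ur.kpss(x, type = "mu", lags = "short"))
summary(ur.kpss(x, type = "mu", lags = "long"))

# kpss.test: lshort = FALSE switches to the longer truncation lag
kpss.test(x, null = "Level", lshort = TRUE)
kpss.test(x, null = "Level", lshort = FALSE)
```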

>Hi R users!
>
>I've come across using KPSS tests for time series analysis, and I have a question that troubles me, since I don't have much experience with time series or the mathematics underlying it.
>
>x<-c(253, 252, 275, 275, 272, 254, 272, 252, 249, 300, 244, 
>258, 255, 285, 301, 278, 279, 304, 275, 276, 313, 292, 302, 
>322, 281, 298, 305, 295, 286, 327, 286, 270, 289, 293, 287, 
>267, 267, 288, 304, 273, 264, 254, 263, 265, 278)
>x <- ts(x, frequency = 12)
>library (urca)
>library (uroot)
>library (tseries) 
> 
>Now, doing an ur.kpss test (mu, lag=3), I cannot reject the null hypothesis of level stationarity.
>Doing a kpss.test (mu, lag=1 by default), the p-value becomes smaller than 0.05, thus rejecting the null of stationarity. Same with KPSS.test (lag=1).
>
>So I have noticed that, as the number of lags increases, rejecting the null becomes harder and harder in each of the tests. I saw that books always cite use of the Bartlett window, but the choice of lag is left to the analyst. My question: what is the "proper" number of lags, so that I don't make any false statements about the data?
>
>Thank you and have a great day!
> 
>
>
>______________________________________________
>R-help at r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
