[Rd] Speed of runif() on different Operating Systems
Duncan Murdoch
murdoch at stats.uwo.ca
Wed Aug 30 12:44:52 CEST 2006
On 8/30/2006 6:33 AM, Prof Brian Ripley wrote:
> On Wed, 30 Aug 2006, Martin Becker wrote:
>
>> Prof Brian Ripley wrote:
>>> No one else seems to have responded to this.
>>>
>>> Please see `Writing R Extensions' for how to time things in R.
>>>
>> Thank you very much for the pointer to system.time(). Although I read most of
>> 'Writing R Extensions', I must have overlooked this (very useful) part.
>> Nevertheless, I was aware that Sys.time() does not measure CPU time (that is
>> why I mentioned that measuring time with Rprof() yields similar results; I
>> should have included the actual Rprof output). Still, I was the only user on
>> both (idle) dual-core systems and thus expected the differences in Sys.time()
>> to correlate closely with the CPU time actually used.
>
> Actually, Rprof measures elapsed time on Windows. Calling gc() first is
> important, and is part of what system.time() does.
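For reference, the timing idiom under discussion looks roughly like this (a
minimal sketch; the loop count is cut down from the original example, and
system.time() already does the gc() itself via its gcFirst = TRUE default):

```r
## Collect garbage before timing so leftover allocations from earlier
## work don't distort the measurement.  system.time() has
## gcFirst = TRUE by default, which performs this step internally.
gc()                                             # flush pending garbage
tm <- system.time(for (i in 1:5) ttt <- runif(1e6))
tm["elapsed"]                                    # wall-clock seconds
```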
>
>>> For things like this, the fine details of how well the compiler keeps the
>>> pipelines and cache filled are important, as is the cache size and memory
>>> speed. Using
>>>
>>> system.time(for (i in 1:500) ttt <- runif(1000000))
>>>
>>> your Linux time looks slow, indeed slower than the only 32-bit Linux box I
>>> have left (a 2GHz 512KB-cache Xeon) and 2.5x slower than a 64-bit R on a
>>> 2.2GHz Opteron (which is doing a lot of other work and so is only giving about
>>> 30% of one of its processors to R: the elapsed time was much longer).
>>>
>>> The binary distribution of R for Windows is compiled with -O3: for some
>>> tasks it makes a lot of difference and this might just be one.
>>>
>> Thank you very much for this valuable piece of information; it explains a big
>> part of the speed difference: recompiling R on my Linux box with -O3
>> significantly increases the speed of runif(). The Linux box now needs less
>> than 40% more time than the WinXP system.
>>> However, what can you usefully do in R with 5e8 random uniforms in anything
>>> like a minute, let alone the aggregate time this list will spend reading
>>> your question?
>> The standard method for simulating the final, minimal and maximal values of
>> Brownian motion relies on a (discrete) n-step random-walk approximation, where
>> n has to be chosen very large (typically n = 100,000) to keep the bias induced
>> by the approximation "small enough" for certain applications. So for MC option
>> pricing of, e.g., double-barrier options, 5e8 random uniforms are needed for
>> just 5,000 draws of the final, minimal and maximal values, which is still a
>> quite small number of draws in MC applications. I am working on a faster
>> simulation method, and of course I want to compare the speed of the new and
>> the (old) standard method; that is why I needed large numbers of random
>> uniforms. I thought the particular application was not of interest to this
>> list, so I left it out of my initial submission. I apologise if my submission
>> was off-topic for this mailing list.
>
> Isn't that usually done by adding rnorm()s and not runif()s?
>
> There are much better algorithms for simulating Brownian motion
> barrier-crossing statistics to high accuracy. It's not my field, but one
> idea is to use a scaled Brownian bridge to infill time when the process is
> near a boundary.
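One concrete version of that bridge idea: conditional on the two endpoint
values over a grid step, the probability that a (driftless, unit-variance)
Brownian bridge crosses an upper barrier in between has a closed form, so
the discretisation bias can be corrected analytically instead of by refining
the grid. A sketch (the function name is made up):

```r
## Probability that a Brownian bridge from w0 to w1 over a time step
## dt crosses the upper barrier B.  Uses the standard closed form
## exp(-2 * (B - w0) * (B - w1) / dt) when both endpoints lie below B;
## if either endpoint is at or above B, the barrier is certainly hit.
bridge_cross_prob <- function(w0, w1, B, dt) {
  ifelse(w0 >= B | w1 >= B, 1,
         exp(-2 * (B - w0) * (B - w1) / dt))
}

bridge_cross_prob(0, 0.1, 0.5, 0.01)
```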
McLeish published algorithms to simulate these directly in a recent
issue of CJS (the Canadian Journal of Statistics). I don't have the
reference handy, but I think it's 2004 or 2005.
Duncan Murdoch
>
> Sometimes the R helpers spend a long time answering the wrong question,
> which is why it always helps to give the real one.
>
>>> If it matters to you, investigate the code your compiler creates. (The
>>> ATLAS developers report very poor performance on certain Pentiums for
>>> certain versions of gcc4.)
>>>
>>>
>> Thank you again for the useful hints!
>>
>> Regards,
>>
>> Martin Becker
>>
>>
>