Rakudo spectest time and parrot GCs

Patrick R. Michaud pmichaud at pobox.com
Tue May 3 20:29:12 UTC 2011


On Tue, May 03, 2011 at 09:45:45PM +1000, Vasily Chekalkin wrote:
> And there is rakudo's build time excerts for "core.pm" test. All
> builds were configured with "--makefile-timing --gen-parrot".

On my system, core.pm consistently takes longer under Parrot 3.3 (gms)
than it does on Parrot 3.0 (ms2).

However, I suspect that the slowdown isn't due to the GC -- indeed,
it's very likely that gms is preventing the overall slowdown from
being far worse than it is.  Here are my results:

Version                    T1     T2     T3     T4    Fastest   vs 2011.01
--------------------------------------------------------------------------
bench-s1 -- core.pm (times in seconds):
  2011.01/3.0 (ms2)       254    279    278    275      254       100.0%
  2011.02/3.1 (ms2)       320    321    320    321      320       126.0%
  2011.03/3.2 (gms)       264    263    263    264      263       103.5%
  2011.04/3.3 (gms)       291    292    293    290      290       114.2%
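
(The "vs 2011.01" column is each release's fastest time as a percentage
of the 2011.01 fastest, e.g. 290 / 254 is roughly 114.2% for 2011.04.)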

For the above I built each of Rakudo 2011.01 through 2011.04 from
its tarball, using "perl Configure.pl --makefile-timing --gen-parrot",
then ran "make clean; make" on each one and noted the elapsed time
for the core.pm step.
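
Roughly speaking, each bench-s1 run had the shape of the sketch below.
This is only an illustration, not the actual script -- the real scripts
and log files are linked below:

    # rough sketch; log file names are illustrative
    perl Configure.pl --makefile-timing --gen-parrot
    for run in 1 2 3 4; do
        make clean
        make 2>&1 | tee core-run$run.log
        # --makefile-timing makes the build report elapsed time per
        # step, so the core.pm figure comes out of the build log
        grep core.pm core-run$run.log
    done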

The 254 value looks like an outlier, but on some earlier tests
and builds of 2011.01 I was able to consistently get core.pm times
of approximately 200s, so I'm not willing to throw it out.

All of the scripts and log files used to create the above table
are available from http://pmichaud.com/rakbench , in case anyone
wants to double-check the values or try it out.  Also, these were
all performed on a relatively lightly-loaded machine (64-bit, 4GB,
Kubuntu 11.04).

I also used a similar procedure for timing t/spec/S32-trig/sin.t
and t/spec/S05-mass/rx.t, since those are two of the longest-running
tests in Rakudo's test suite.  Here are their results:

Version                    T1     T2     T3     T4    Fastest   vs 2011.01
--------------------------------------------------------------------------
bench-s2 (sin.t, times in seconds):
  2011.01/3.0 (ms2)      42.2   41.1   41.3   41.1     41.1       100.0%
  2011.02/3.1 (ms2)      65.1   64.7   64.6   65.2     64.6       157.2%
  2011.03/3.2 (gms)      45.7   45.7   46.7   46.2     45.7       111.2%
  2011.04/3.3 (gms)      47.4   47.9   47.7   47.6     47.4       115.3%

bench-s3 (rx.t, times in seconds):
  2011.01/3.0 (ms2)     150.9  149.9  149.9  149.9    149.9       100.0%
  2011.02/3.1 (ms2)     173.1  173.2  173.7  173.2    173.1       115.8%
  2011.03/3.2 (gms)     118.2  118.2  118.2  118.4    118.2        78.9%
  2011.04/3.3 (gms)     122.1  122.2  121.3  122.6    121.3        80.9%

Here we can see that 2011.04 performs significantly worse than 2011.01
on the sin.t test, and significantly better on the rx.t test.
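
For reference, the spectest timings were gathered in roughly the same
way; the sketch below is illustrative only (the actual bench-s2 and
bench-s3 scripts are at the rakbench URL above), and it assumes the
test file can be run directly against the freshly-built perl6:

    # illustrative only; four runs per test, fastest one reported
    for run in 1 2 3 4; do
        /usr/bin/time -f '%e' ./perl6 t/spec/S32-trig/sin.t > /dev/null
    done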

Our current guess is that something happened between 3.0 and 3.1
that significantly slowed down Parrot, and that the gms GC has since
clawed back much (but not all) of the lost ground for Rakudo spectests
and build time on my machine.

I'm likely to re-run the entire suite of the above tests later tonight
on a different machine to verify these results and to establish a
baseline for future tests.  If I get significantly different results
from the above I'll report back to the list.

I'm also currently running a bench-s4 script which measures the time
needed for a full spectest; that will require about sixteen hours
to run.

Pm


