Recent changes impeded building and testing on smaller boxes
James E Keenan
jkeen at verizon.net
Sat Sep 25 13:40:08 UTC 2010
During development of the gc_massacre branch, I tried to build and test
parrot on both Linux/i386 and Darwin/PPC. I have used each of these
machines for testing since I joined the Parrot project in late 2006.
While I have never explicitly tracked how fast 'make' and 'make test'
run on those boxes, I do have a general sense of how long those runs
typically take.
I noticed that when I was building and testing the gc_massacre branch on
the Darwin/PPC box (a 6-year-old iBook G4 running Mac OS X 10.4.11), all
other operations on that machine would slow to a crawl. For example,
using Alt-Tab to toggle among programs would respond only after 30-60
seconds of latency. Switching from a terminal to a browser (SeaMonkey or
Camino) sometimes took 5 minutes. This slowdown was most noticeable in
the parts of 'make' where './ops2c' was being run. 'make test' ground
to a halt in 't/compilers/opsc/*.t'.
Trunk was continuing to build and test satisfactorily at this time, but
once the gc_massacre branch was merged into trunk at r49269, trunk began
to suffer all the same problems that the branch had. My unscientific
impression is that 'make' now takes 10% longer to complete (e.g., 12
minutes versus 10-1/2 minutes), though it has taken as long as 19
minutes. But what is most striking is that 'make test' won't complete
at all unless I manually delete the tests in 't/compilers/opsc/*.t' from
the roster of tests. The '01' test in that directory will complete ...
and then I will get no output whatsoever from the '02' test. If I
Ctrl-C to kill the '02' test, the '03' test will run -- but other tests
in this directory will hang indefinitely as well.
Last night bacek was able to devote some attention to this problem. He
pointed to a particular line of code recently merged in from the
gc_massacre branch and asked me to make a manual change:
$ svn diff
--- src/gc/gc_ms2.c (revision 49323)
+++ src/gc/gc_ms2.c (working copy)
@@ -623,7 +623,8 @@
/* Collect every 256M allocated. */
/* Hardcode for now. Will be configured via CLI */
- self->gc_threshold = 256 * 1024 * 1024;
+/* self->gc_threshold = 256 * 1024 * 1024; */
+ self->gc_threshold = 64 * 1024 * 1024;
interp->gc_sys->gc_private = self;
256M happens to be the amount of physical memory reported by 'top' for this
machine. Changing the gc_threshold from 256M to 64M did enable 'make
test' to run completely (including t/compilers/opsc/*.t) and PASS in
about 19 minutes. So the value of the gc_threshold does have a
significant impact on whether parrot will run (in any meaningful sense)
on a particular machine.
Now, I know that there are some who will snigger at the thought of
trying to test Parrot on a machine that is more than 6 years old. "Just
go out and buy a new laptop, kid51. It'll be sure to have at least 10
times as much memory as that ancient, battered box."
Needless to say, I don't accept that logic -- and not only because
"ancient, battered box" describes the box's owner as well as the box.
As I pointed out when voicing a similar problem with Rakudo Star, if
we've been able to build and run a particular program (Parrot, Rakudo
Perl, whatever) on a particular machine for years on end, and if we can
no longer do so, then our scientific rigor ought to require us to
determine *why* we can no longer do so, rather than sweeping the problem
under the rug. AFAICT, we have never specified minimum memory or other
requirements for building or running Parrot on any particular box.
Hence, it seems to me reasonable to assume that if Parrot built on
a particular box in 2006, it will continue to do so indefinitely unless
building on that box is explicitly deprecated by the project. After
all, it's quite possible that when Parrot runs on embedded devices in
the future, it will have to run in an environment with even less memory
than this ancient, battered box has.
Thank you very much.