plobsing at gmail.com
Tue Jun 1 06:40:34 UTC 2010
On Mon, May 31, 2010 at 12:11 PM, kjstol <parrotcode at gmail.com> wrote:
> I agree with Pm. Although I haven't been very active in following
> Parrot development in recent times, ideas like these neglect the
> rationale of the initial architectural design decisions (see for many
> such ideas the blog of Dan Sugalski, still online at
> www.sidhe.org/~dan/blog). That is not to say that there is no room for
> reconsidering such design decisions, but many concepts have been well
> thought out, and, as always, there are pros and cons for each design
> decision. One such disadvantage of having many variants is just that:
> a large set of ops. That was a trade-off made at the time. Again, this
> is not to say that insights may change, but I think it is important to
> keep the history and design rationale in mind, before ripping out all
> sorts of ideas. (and also, show me the evidence! is there any evidence
> that the current situation is worse than the newly proposed solution?)
I think that Parrot has a very interesting design and the potential to
be very good. However, clever as he was, Dan was not prescient, and
his ideas can and should be amended based on current information.
That is what this proposal is: an amendment, not a reversal. The
information I am working off of, which may be flawed, is that
code size, runtime memory usage, and cache performance are all
negatively affected by having const-variant opcodes.
I am assuming you are basing your reference to Dan's blog on the post
"WWIT: All Those Opcodes" in which it is basically asserted (with much
handwaving) that opcodes have near-zero cost. While these costs may be
near zero per-op, the costs do build up when you have >1k opcodes.
Also these costs may rise if we associate opcode tables per bytecode
segment or per thread instead of keeping them in what is effectively
a global right now (which is why a number of things are in varying
states of fail).
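To make the "costs build up" point concrete, here is a back-of-envelope
sketch in Python. All of the sizes and counts below are illustrative
assumptions, not measured Parrot values:

```python
# Rough arithmetic: cost of duplicating an op lookup table per bytecode
# segment instead of sharing one global table.
# ASSUMPTIONS (not measured): entry layout and segment count are made up.

OP_COUNT = 1200          # assumed: core op count is somewhere over 1k
ENTRY_BYTES = 3 * 8      # assumed: function pointer + name pointer + flags
SEGMENTS = 50            # assumed: segments loaded by a mid-sized program

global_table = OP_COUNT * ENTRY_BYTES        # one shared table
per_segment_total = SEGMENTS * global_table  # a copy per segment

print(global_table)       # tens of KB for a single table
print(per_segment_total)  # grows to MB territory when copied per segment
```

Even if each individual entry is cheap, moving the table from a global
to a per-segment (or per-thread) structure multiplies the footprint by
the number of segments, which is the scenario the proposal worries about.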
> just my 2c.
> On Mon, May 31, 2010 at 11:50 AM, Patrick R. Michaud <pmichaud at pobox.com> wrote:
>> On Sun, May 30, 2010 at 10:51:36PM -0700, Peter Lobsinger wrote:
>>> An idea that's been batted about a bit lately is that our op-bloat is
>>> at least partly caused by all the constant-argument variant forms all
>>> of our ops need to support.
>>> To address this, the constant_unfolding branch adds a step to IMCC
>>> instruction selection to check for non-const variants of an
>>> as-of-yet-unfound op. If found, this op is used and constant arguments
>>> are handled by assignments to temporary registers. ...
>> I may be missing an important component here, but reading the above
>> makes me want to scream "Slow down a bit, partner!"
>> In 2008 when I first proposed that we start reviewing and reducing our
>> opcode set, it was with the primary aim of regularizing the API a bit
>> and eliminating opcodes that are no longer used or operations that really
>> warrant being object methods instead of opcodes (e.g., I/O and some math
>> ops, where the opcodes really were just an opcode interface to underlying
>> method calls). The point was to present a more uniform opcode API, not
>> simply to reduce the opcode set to the smallest possible number.
I was under the impression that the cost of having a large table of
ops as well as a lot of nearly-identical code implementing ops was at
least part of the push for fewer ops. The cost of these tables might
become a lot more significant if we associate optables with bytecode
segments instead of interpreters, although sorear might have some
good partial workarounds for this (see "[RFC] Dynop Opnum Mapping").
>> The above proposal looks to me as though it is reducing the number
>> of opcodes at the expense of increasing the runtime opcode dispatches
>> and the size of the resulting bytecode. That feels very much like
>> a false optimization to me. (Again, I may be totally misreading what
>> is proposed or missing some key component in all of this.)
There are two somewhat linked components to this:
1) Look at this nifty thing I've made. It means we can cope with ops
missing constant variants.
2) We can (maybe should) use this to get our op count down.
I assume you object mainly to the latter.
The idea (which I may not have conveyed sufficiently well), is that we
eliminate rarely-used or even nonsensical constant variants of ops.
That is, we get a constant gain at the expense of a cost to rare use cases.
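The fallback described above can be sketched roughly in Python. This is
not IMCC's actual implementation; the function names and the instruction
representation here are purely illustrative:

```python
# Sketch of constant unfolding: if no variant of an op accepts a
# constant in a given position, load the constant into a fresh
# temporary register and emit the all-register variant instead.
# (Hypothetical data model, not Parrot/IMCC internals.)

def unfold_constants(opname, args, optable, new_temp):
    """args: list of (kind, value) where kind is 'reg' or 'const'."""
    if (opname, tuple(kind for kind, _ in args)) in optable:
        return [(opname, args)]          # exact variant exists; use it
    prelude, new_args = [], []
    for kind, value in args:
        if kind == 'const':
            tmp = new_temp()             # fresh temporary register
            prelude.append(('set', [('reg', tmp), ('const', value)]))
            new_args.append(('reg', tmp))
        else:
            new_args.append((kind, value))
    return prelude + [(opname, new_args)]

# Suppose only the all-register form of find_cclass survives:
optable = {('find_cclass', ('reg', 'reg', 'reg', 'reg'))}
counter = iter(range(99, 200))
out = unfold_constants(
    'find_cclass',
    [('const', 32), ('reg', 'S2'), ('reg', 'I3'), ('reg', 'I4')],
    optable, lambda: f"I{next(counter)}")
print(out)
```

This mirrors the two-instruction expansion Pm quotes below: one `set`
into a temporary, then the register-only op.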
>>> As a proof of concept, I have removed 30 const form find_cclass and
>>> find_not_cclass ops in r47192 with no ill effect.
>> So, if I understand correctly, the new approach takes a current PIR
>> instruction like
>> $I0 = find_cclass .CCLASS_WHITESPACE, $S2, $I3, $I4
>> and generates it as:
>> $I99 = .CCLASS_WHITESPACE
>> $I0 = find_cclass $I99, $S2, $I3, $I4
>> Is this correct? (I chose this example because it is extremely common
>> in both PGE and NQP-rx.) If so, we've replaced a single opcode and dispatch
>> in the bytecode with two, increasing runtime and memory costs.
This is exactly one of the intents. And yes, I agree that has
increased the cost of this use case. I may have chosen my example op
poorly. I've put the desired constant arg forms back in r47246.
Have I mentioned how easy ops2c.pl, and now opsc, make suppressing the
generation of const-variant ops? It is as simple as changing "in INT"
to "invar INT" in the op declaration. I only changed four lines; the
hundreds of line changes in the patch are all generated bootstrap code.
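The reason four declaration lines eliminate so many generated ops is
combinatorial: each "in" argument doubles the number of variants. A
small Python sketch of that arithmetic (illustrative only, not how
ops2c/opsc actually enumerate variants):

```python
# Each 'in' parameter admits both a register and a constant form, so
# the generated variant count is a product over the parameters.
# 'invar' pins a parameter to register-only.
from itertools import product

def variants(params):
    """params: list of 'in' (reg-or-const) or 'invar' (reg-only)."""
    choices = [('reg', 'const') if p == 'in' else ('reg',)
               for p in params]
    return list(product(*choices))

# find_cclass with four 'in' args: 2**4 = 16 generated variants
print(len(variants(['in', 'in', 'in', 'in'])))      # 16
# declare them 'invar' and the count collapses to 1
print(len(variants(['invar', 'invar', 'invar', 'invar'])))  # 1
```

So flipping a handful of qualifiers in the .ops source prunes a whole
cross-product of near-identical generated op bodies.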
The other intent of this modification is to take on the job of our
slightly buggy constant folder which is full of hard-coded special
cases. With the -O0 IMCC dependency on this component gone (-O0
shouldn't be doing optimization at all), it can be decoupled and
either fixed up or removed.
>> Now then, I grant that the other variants of find(_not)_cclass are quite rare
>> -- it's unlikely that any of the other operands are likely to be constants.
>> So, we _could_ leave the two common constant forms in place and pessimize
>> the other variants, which is what I suspect this patch is intended to do.
>> But (1) I don't know how many opcodes this pessimization will be useful for
>> beyond the *cclass variants, and (2) doing this feels like it makes the
>> opcode set less regular instead of more regular (which was my original
>> motivation for suggesting opcode reductions).
Yes, leaving only the common constant forms is exactly the intent.
(1) having the local pessimization available costs almost nothing. It
is ~50 lines of code, on a branch only taken when you've failed to
find an op anyway, and took me less time to write than this email. It
allows tuning if we choose to use it.
(2) which opcode set do you want regular? the one visible to
PIR/PASM/PCT or the one in PBC? The PIR opset doesn't change at all
with this. The PBC opset is already somewhat irregular, but hidden
behind a layer of IMCC virtual ops. Pay no attention to the man behind
the curtain.
>> (Also, does the IMCC transformation above re-use the same set of temporary
>> registers, or does it create a new temporary register for each opcode instance?)
A valid concern. And currently the answer is that IMCC allocates
registers in ascending order based on demand. The correct way to deal
with this and other register explosion problems is to have a real
register allocator. These temporaries are short-lived and very local,
so even the most naive register allocator should be able to handle them.
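To illustrate why short-lived temporaries are the easy case, here is a
minimal linear-scan sketch in Python (not Parrot's allocator; the data
model is invented for this example): non-overlapping intervals all
share a single register, so unfolded constants need not inflate the
register count.

```python
# Naive linear-scan register assignment over live intervals.
# A register is reusable as soon as its interval has expired.

def assign_registers(ranges):
    """ranges: list of (start, end) live intervals, in start order."""
    active = []     # (end, reg) pairs for intervals still live
    free = []       # registers whose intervals have expired
    next_reg = 0
    result = []
    for start, end in ranges:
        still_live = []
        for e, r in active:
            if e <= start:
                free.append(r)          # interval over: reg reusable
            else:
                still_live.append((e, r))
        active = still_live
        reg = free.pop() if free else next_reg
        if not free and reg == next_reg:
            next_reg += 1
        active.append((end, reg))
        result.append(reg)
    return result

# Three back-to-back constant temporaries, none overlapping:
print(assign_registers([(0, 1), (2, 3), (4, 5)]))   # [0, 0, 0]
# Three overlapping values still need three registers:
print(assign_registers([(0, 4), (1, 5), (2, 6)]))   # [0, 1, 2]
```

The unfolded-constant temporaries are born and die within a couple of
instructions, so they fall into the first pattern.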
>> Anyway, if having a slightly smaller but irregular opcode set is seen as
>> being more beneficial than a regular but slightly larger set, I can agree
>> to that. Just be careful about removing constant-variant opcodes that
>> are in fact quite common at runtime, such as the find_cclass one above.
I'm mindful of that, which is why I mentioned that I really don't
know these all that well, but they seemed bloaty.
>> It may also be worth noting that PCT currently attempts to bias towards
>> the constant-form of opcodes wherever it can, under the theory that this
>> leads to faster/smaller code. (However, I can come up with quite a few
>> common cases where such optimization turns out to be not optimal, so some
>> pieces there need to be rethought as well.)
PCT is a major Parrot user, and will likely be a factor in what ops
are considered underused.
Would you care to elaborate (either here or elsewhere) about how PCT
biases towards using constants? Aren't they more or less simply
properties of the code being compiled?