| Commit message | Author | Age | Files | Lines |
| |
(#7072)
| |
When we combine a load/store offset with a const, we must not
overflow, as the semantics of offsets do not wrap.
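For instance, a sketch of the kind of case that must be rejected (values chosen
purely for illustration):
(i32.load offset=8
  (i32.add
    (local.get $ptr)
    (i32.const 4294967292)))  ;; 2^32 - 4
Folding the constant into the offset would require offset=4294967300, which does
not fit in a 32-bit offset; wrapping it to offset=4 would silently change which
address is accessed, so the combination must simply be skipped.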
| |
with ref.as_non_null (#7004)
(any.convert_extern/extern.convert_any (ref.as_non_null ..))
=>
(ref.as_non_null (any.convert_extern/extern.convert_any ..))
This then allows the RefAsNonNull to be combined with parents in some cases
(whereas the reverse allows nothing).
| |
As the name of a class, uppercase seems better here.
| |
HeapStoreOptimization (#6882)
This just moves code out of OptimizeInstructions to the new pass. The existing
test is renamed and now runs the new pass instead. The new pass is run right
after each --optimize-instructions invocation, so it should not cause any
noticeable effects whatsoever, making this NFC.
The motivation here is that there is a bug in the pass, see the new testcase
added at the end, which shows the bug. It is not practical to fix that bug in
OptimizeInstructions since we need more than peephole optimizations to do
so. This PR moves the code to a new pass so we can fix it there properly,
later.
The new pass is named HeapStoreOptimization since the same infrastructure
we will need to fix the bug will also help dead store elimination and related
things.
| |
Rename the instructions `extern.internalize` to `any.convert_extern` and
`extern.externalize` to `extern.convert_any` to follow the spec more closely.
This was changed in
https://github.com/WebAssembly/gc/issues/432.
The legacy name is still accepted in text inputs and in the C and JS
APIs.
| |
(#6584)
Heap stores (struct.set) are optimized into the struct.new when they are adjacent
in a statement list.
Pushing struct.new down past irrelevant instructions increases the likelihood that
it ends up adjacent to sets.
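A rough sketch of the intent (the type, field, and locals are invented for
illustration):
(local.set $ref (struct.new $T (i32.const 0)))
(local.set $other (i32.const 10))                 ;; unrelated to $ref
(struct.set $T 0 (local.get $ref) (i32.const 42))
Pushing the first local.set down past the unrelated one makes the struct.new and
the struct.set adjacent, so the set can then fold into the allocation.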
| |
test (#6596)
This existed before #6495 but became noticeable there. We only looked at
the fallthrough values in the later part of areConsecutiveInputsEqual, but
there can be invalidation due to the non-fallthrough part:
(i32.add
  (local.get $x)
  (block
    (local.set $x ..)
    (local.get $x)
  )
)
The set can cause the second local.get to read a different value than the first. To fix this,
check if the non-fallthrough part invalidates the fallthrough (but only
on the right hand side).
Fixes #6593
| |
Previously we checked late, and as a result might end up failing to optimize when
a sub-pattern could have worked. E.g.
(call
(A)
)
(call
(A)
)
The call cannot be optimized, but the A pattern repeats. Before this PR we'd
greedily focus on the entire call and then fail. After this PR we skip the call
before we commit to which patterns to try to optimize, so we succeed.
Add an isShallowlyGenerative helper here, as we compute this step by step as
we go. Also remove a parameter from the generativity code (it did not use the
features it was passed).
| |
struct.new_with_default (#6523)
Before, we preferred not to add default values, as that increases code size. But since
#6495 we turn more things into struct.new_with_default, so it is important to handle
this. It seems likely that in most cases the code size downside of adding default
values is offset by avoiding a local.set later, so always do this (rather than adding
some kind of heuristic).
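A sketch of the kind of rewrite this enables (type, fields, and locals invented;
the exact pattern handled by the pass may differ):
(local.set $ref (struct.new_default $T))          ;; $T has two i32 fields
(struct.set $T 1 (local.get $ref) (local.get $x))
=>
(local.set $ref (struct.new $T
  (i32.const 0)                                   ;; explicit default for field 0
  (local.get $x)))
The explicit default costs a little size, but it lets the set fold into the allocation.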
| |
struct.new with default values can be struct.new_default.
array.new with default values can be array.new_default.
array.new of size 1 should be array.new_fixed (saves the Const).
array.new_fixed with default values can be array.new_default.
array.new_fixed with equal but non-default values can be array.new
(basically use a Const to say how many copies we want, rather than repeating the value); see the sketches below.
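For example, sketches of the last two rewrites, in the current text syntax (type
and values invented for illustration):
(array.new_fixed $arr 3 (i32.const 0) (i32.const 0) (i32.const 0))
=>
(array.new_default $arr (i32.const 3))
(array.new_fixed $arr 3 (local.get $x) (local.get $x) (local.get $x))
=>
(array.new $arr (local.get $x) (i32.const 3))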
| |
OptimizeInstructions::areConsecutiveInputsEqual() (#6481)
"Generative" is what we call something like a struct.new that may be syntactically
identical to another struct.new, but which generates a new value each time. The same
is true for calls, which can do anything, including returning a different value for
syntactically identical calls.
This was not a bug because the main user of isGenerative,
areConsecutiveInputsEqual(), was too weak to notice, that is, it gave up sooner,
for other reasons. This PR improves that function to do a much better check,
which makes the fix necessary to prevent regressions.
This is not terribly important for itself, but will help a later PR that will add code
that depends more heavily on areConsecutiveInputsEqual().
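For example (a sketch; the function name is invented), areConsecutiveInputsEqual()
must not treat these two inputs as equal:
(i32.eq
  (call $get)   ;; may return one value here
  (call $get))  ;; and a different value here, despite identical syntax
The same holds for two syntactically identical struct.new expressions, which
produce two distinct references.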
| |
We had an assert there that was wrong. In fact the assert is in just one of two code paths,
and an optional one: the end situation is that we have an expression and a constant to add
to it, and the assert was in the case where the expression is a Const, so we can do the add
at compile time (the other code path does the add at runtime). This code path is optional,
as Precompute would do such compile-time addition anyhow, but it is nice to fix and keep
that path so that this pass emits fully optimal code.
| |
E.g.
(tuple.extract 1
  (tuple.make (A) (B) (C)))
=>
(B)
Modify some existing tests to not be in this trivial form, so that they do not
stop testing what they should.
| |
Now that the WasmGC spec has settled on a way of validating non-nullable locals,
we no longer need this experimental feature that allowed nonstandard uses of
non-nullable locals.
| |
Simplify the optimization of ref.cast and ref.test in OptimizeInstructions by
moving the loop that examines fallthrough values one at a time out to a shared
function in properties.h. Also simplify ref.cast optimization by analyzing the
cast result in just one place.
In addition to simplifying the code, also make the cast optimizations more
powerful by analyzing the nullability and heap type of the cast value
independently, resulting in a potentially more precise analysis of the cast
behavior. Also improve optimization power by considering fallthrough values when
optimizing the SuccessOnlyIfNonNull case.
| |
Remove old, experimental instructions and type encodings that will not be
shipped as part of WasmGC. Updating the encodings and text format to match the
final spec is left as future work.
| |
This parallels the code in RefCast. Previously we only looked at the type reaching us, but
intermediate fallthrough values can let us optimize too. In particular, we were not
optimizing (ref.test (local.tee ..)) if the tee was to a less-refined type.
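A sketch of that pattern, in the current cast syntax (names invented for
illustration):
(ref.test (ref $T)
  (local.tee $any        ;; the local's type is only (ref null any)
    (struct.new $T)))    ;; but the fallthrough value has type (ref $T)
The tee's own type proves nothing, while the fallthrough value shows the test must
succeed, so it can be folded to a constant 1 (keeping the tee for its effects).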
| |
arm (#5681)
The logic says that if an if/select has an arm that returns a null type, and the if/select
goes into a cast, then we can ignore that arm in tnh mode (as it would trap, and we
are ignoring the possibility of a trap). But it is not enough for the arm to have a null
type: the null must actually flow out, rather than, say, a return being executed before
the null is reached.
One existing test needed adjustment, as it used calls as the "thing with effects". But a
call can transfer control flow when EH is enabled, and this pass has -all. Rather
than mess with the features, I switched the effects to use locals.
| |
immediately (#5673)
Emit an unreachable, but guarded by a block as we do in other cases in this pass, to avoid
having unreachable code that is not fully propagated during the pass (as we only do a full
refinalize at the end). See existing comments starting with
"Make sure to emit a block with the same type as us" in the pass.
This is mostly not a problem with other casts, but ref.test returns an i32, which
lots of our code tries to optimize.
| |
Casting (ref nofunc) to (ref func) seems like it can succeed based on the rule
of "if it's a subtype, it can cast ok." But the fuzzer found a corner case where that
leads to a validation error (see testcase).
Refactor the cast evaluation logic to handle uninhabitable refs directly, and
return Unreachable for them (since the cast cannot even be reached).
Also reorder the rule checks there to always check for a non-nullable cast
of a bottom type (which always fails).
| |
The fuzzer found another case we were missing. I realized that we can just
check for this in replaceCurrent, at least for places that call that method,
which is the common case. So this simplifies the code while fixing a bug.
| |
(#5641)
| |
Trivial peephole optimization. Some work was needed in the tests, as some of
them relied on that pattern for convenience, so I modified them to try to keep
them testing the same thing as much as possible (for one of them,
struct.set.null.fallthrough, I don't think we can keep testing the same thing,
as the situation should no longer be possible).
| |
This is the flip case of #5630
| |
A cast to a non-nullable null (an impossible type) must trap.
In traps-never-happen mode, a cast that either returns a null or traps will
definitely return a null.
Followup to #5461 which emits casts to bottom types.
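In other words, sketched in the current cast syntax:
(ref.cast (ref none) (..))        ;; uninhabitable target: must trap
(ref.cast (ref null none) (..))   ;; returns null or traps; under tnh, assume it returns null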
| |
Followup to #5474
| |
If traps can happen, then we can't always remove a trap on null
on the ref input to struct.set, since it has two children,
(struct.set
(ref.as_non_null X)
(call $foo))
Removing the ref.as would not prevent a trap, as the struct.set will still trap
on null, but it would move the trap to after the call, which is bad.
| |
We only checked for an unsigned overflow, which was wrong.
Fixes #5464
| |
Optimize ref.cast instructions that must succeed, either by simply replacing them
with their child when the child has a more refined type, or by propagating a
further-removed fallthrough value with a more refined type using a tee.
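Roughly, for the simple case (types invented, with $B a subtype of $A):
(ref.cast (ref $A)
  (struct.new $B))       ;; the child's type (ref $B) is already more refined
=>
(struct.new $B)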
| |
Instead of only looking at the final fallthrough value and seeing whether it is
a `RefNull` expression to determine if we are casting a null value, check the
type of each intermediate fallthrough value to see if it is a null reference.
Also improve `evaluateCastCheck` to return `Failure` instead of
`SuccessOnlyIfNonNull` if the cast value is a reference to a bottom type, since
such references can never be non-null.
| |
`skipCast` takes an optional parameter that bounds how general the resulting
type is allowed to be. That parameter previously had a default value of `anyref`
with the intention of allowing all casts to be skipped, but that default
inadvertently prevented any casts in the `func` or `extern` type hierarchies
from being skipped. Update `skipCast` so that the default parameter allows all
casts to be skipped in all hierarchies.
| |
We already handled Success and Failure. Also handle SuccessOnlyIfNull and
SuccessOnlyIfNonNull.
| |
These operations are deprecated and directly representable as casts, so remove
their opcodes in the internal IR and parse them as casts instead. For now, add
logic to the printing and binary writing of RefCast to continue emitting the
legacy instructions to minimize test changes. The few test changes necessary are
because it is no longer valid to perform a ref.as_func on values outside the
func type hierarchy now that ref.as_func is subject to the ref.cast validation
rules.
RefAsExternInternalize, RefAsExternExternalize, and RefAsNonNull are left
unmodified. A future PR may remove RefAsNonNull as well, since it is also
expressible with casts.
| |
Add a new evaluateCastCheck() utility and use that in relevant places.
| |
Look for definitely-failing casts along all the fallthrough values. Specifically, if any
cast in the middle proves the final cast will fail, then we know we will trap.
Fully optimize redundant casts, considering both the type and the heap type.
Combine a cast with a ref.as_non_null.
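The last point, roughly, in the current cast syntax:
(ref.cast (ref null $T)
  (ref.as_non_null (..)))
=>
(ref.cast (ref $T)
  (..))
The non-nullable cast already rejects null, so the separate ref.as_non_null is
redundant.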
| |
* Replace `RefIs` with `RefIsNull`
The other `ref.is*` instructions are deprecated and expressible in terms of
`ref.test`. Update binary and text parsing to parse those instructions as
`RefTest` expressions. Also update the printing and emitting of `RefTest`
expressions to emit the legacy instructions for now to minimize test changes and
make this a mostly non-functional change. Since `ref.is_null` is the only
`RefIs` instruction left, remove the `RefIsOp` field and rename the expression
class to `RefIsNull`.
The few test changes are due to the fact that `ref.is*` instructions are now
subject to `ref.test` validation, and in particular it is no longer valid to
perform a `ref.is_func` on a value outside of the `func` type hierarchy.
| |
Also add some comments on related optimization opportunities.
Also delete a test of a combination of types between hierarchies, which will soon
not be expressible at all in the IR.
| |
ref.as_non_null (#5398)
We were checking the heap type, but now casts care about the nullability as
well.
If the nullability is the only problem (that is, the heap type will be fine but we
might have a null), we can at least switch a ref.cast (non-null) to a
ref.as_non_null.
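That is, roughly, in the current cast syntax (where the input already has heap
type $T but may be null):
(ref.cast (ref $T)
  (..))
=>
(ref.as_non_null
  (..))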
| |
This fixes an oversight in #5395
| |
(#5395)
| |
visitRefCast can use trapOnNonNull. To make this not regress, add fallthrough
analysis there as well.
Minor test changes are due to trapOnNonNull using getDroppedChildren, which
only emits drops of necessary children. It also tells us to refinalize, so it is ok for it
to change the type to unreachable.
| |
As noted in #4806, trying to optimize past level 0 can result in passes emitting
non-JS code, which then cannot be converted during final output.
This commit creates a new targetJS option in PassOptions, which can be checked
inside each pass where non-JS code might be emitted. It initially adds that logic
to OptimizeInstructions, where this issue was first noticed.
| |
This new cast configuration was not expressible with the legacy cast
instructions. Although it is valid in Wasm, do not allow nullable casts of
non-nullable references, since those would unnecessarily lose type information.
Convert such casts to be non-nullable during expression finalization.
| |
This new variant of ref.test returns 1 if the input is null.
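For example, in the current standard text syntax (with $T standing for any
concrete heap type):
(ref.test (ref null $T) (ref.null none))   ;; => (i32.const 1)
(ref.test (ref $T) (ref.null none))        ;; => (i32.const 0)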
|