This moves code out of the existing methods so that it can be
called directly from more places. A later PR will use the new
methods.
Localizer may have added a new tee and a local, and we should apply
the tee in the right place.
A field in a struct is constant if we can see that in the entire program we
only ever write the same constant value to it. For example, imagine a
vtable type that we construct with the same funcrefs each time: if there
are no struct.sets, or the struct.sets all write that same value, then we
could replace a get with the constant value, since it cannot be anything
else:
(struct.new $T (i32.const 10) (rtt))
..no other conflicting values..
(struct.get $T 0) => (i32.const 10)
If the value is a function reference, then this may allow other passes
to turn what was a call_ref into a direct call, which can perhaps also be
inlined - effectively a form of devirtualization.
This only works with nominal typing, as we need to know the supertype
of each type. (It could work in theory with structural typing, but we would
need to do hard work to find all the possible supertypes, and it would also
become far less effective.)
This deletes a trivial test for running -O on GC content. We have
many more tests for GC at this point, so that test is not needed, and
this PR also optimizes its code into something trivial and uninteresting
anyhow.
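To make the idea concrete, here is a minimal standalone sketch (hypothetical structures, not the pass's actual code; the real analysis must also merge writes across subtypes, which is why nominal typing matters): record every value written to each (type, field) pair across the whole program, and replace gets only when a single constant was ever written.
```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <utility>

// Hypothetical whole-program scan: for each (type, field) pair, track
// whether every observed write (a struct.new operand or a struct.set value)
// stores one single constant.
struct FieldInfo {
  std::optional<int64_t> constant; // the one constant seen so far, if any
  bool multipleValues = false;     // set once we see a conflicting write
  void note(int64_t value) {
    if (!constant) {
      constant = value;
    } else if (*constant != value) {
      multipleValues = true;
    }
  }
};

int main() {
  using FieldKey = std::pair<std::string, int>; // (type name, field index)
  std::map<FieldKey, FieldInfo> fields;

  // Pretend the scan of the whole program saw these writes:
  fields[{"$T", 0}].note(10); // (struct.new $T (i32.const 10) (rtt))
  fields[{"$T", 0}].note(10); // a struct.set of the same value: still constant
  fields[{"$U", 0}].note(1);
  fields[{"$U", 0}].note(2);  // conflicting write: $U field 0 is not constant

  for (auto& [key, info] : fields) {
    if (info.constant && !info.multipleValues) {
      std::cout << "(struct.get " << key.first << " " << key.second
                << ") => (i32.const " << *info.constant << ")\n";
    }
  }
}
```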
Define a "single constant expression" consistently, as a thing that is a
compile-time constant. Move I31New out of there, and also clean up
the function it is moved into, canInitializeGlobal(), to be consistent
in validating children.
We were missing StructNew and ArrayLen.
Also refactor the helper method to allow for shorter code in each
caller.
The wrong variable was null checked, leading to segfaults.
Add a new run line to every lit test containing a struct type to run the test
again with nominal typing. In cases where tests do not use any struct subtyping,
this does not change the test output. In cases where struct subtyping is used, a
new check prefix is introduced to capture the difference that `(extends ...)`
clauses are emitted in nominal mode but not in equirecursive mode. There are no
other test differences.
Some tests are cleaned up along the way. Notably,
O_all-features{,-ignore-implicit-traps}.wast is consolidated to a single file.
Add a new OptimizeForJS pass which contains rewriting rules specific to JavaScript.
LLVM usually lowers `x != 0 && (x & (x - 1)) == 0` (isPowerOf2) to `popcnt(x) == 1`, which is fine for wasm and most other targets but is quite expensive for JavaScript. In this PR we lower the popcnt pattern back to the isPowerOf2 pattern.
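For reference, the two patterns agree on all inputs: `x & (x - 1)` clears the lowest set bit, so for nonzero x it is zero exactly when one bit is set. A standalone C++20 check (illustration only, not Binaryen code):
```cpp
#include <bit>      // std::popcount (C++20)
#include <cassert>
#include <cstdint>

// The cheap-for-JS form: x is a power of two iff it is nonzero and
// clearing its lowest set bit (x & (x - 1)) leaves zero.
bool isPowerOf2(uint32_t x) { return x != 0 && (x & (x - 1)) == 0; }

int main() {
  // Verify the equivalence with popcnt(x) == 1 over a large range.
  for (uint64_t x = 0; x <= 1000000; x++) {
    uint32_t v = static_cast<uint32_t>(x);
    assert(isPowerOf2(v) == (std::popcount(v) == 1));
  }
}
```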
The field name lookup was previously failing because the lookup used a fresh
struct type rather than using the intended struct type.
We just cleared the list of exports, but the exportMap was still populated, so
the data was in an inconsistent state.
This fixes the case of running --extract-function multiple times on a file (which
is usually not useful, unless one of the passes you are debugging adds new
functions to the file - which some do).
DeadArgumentElimination (#4036)
To do this we need to look at tail calls and not just returns.
It is ok to use the default value of a reference even if we refine the type,
as it would be a more specifically-typed null, and all nulls compare the
same. However, if the default is used then we *cannot* alter the type to
be non-nullable, as then we'd use a null where that is not allowed.
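A hedged sketch of that decision (hypothetical names, not the pass's actual code): refining to another nullable type is always fine for a default-initialized slot, but dropping nullability is only fine when the default is never read.
```cpp
#include <iostream>

// Hypothetical model of one refinement decision for a default-initialized
// local or field whose declared type we want to make more specific.
struct Refinement {
  bool newTypeIsNullable; // does the proposed refined type still admit null?
  bool defaultValueUsed;  // is the zero-initialized (null) default ever read?
};

// All nulls compare the same, so a more refined *nullable* type keeps any
// use of the default intact; a non-nullable refinement would make the
// default value itself invalid.
bool canRefine(const Refinement& r) {
  return r.newTypeIsNullable || !r.defaultValueUsed;
}

int main() {
  std::cout << canRefine({true, true})    // nullable refinement: ok
            << canRefine({false, false})  // default never read: ok
            << canRefine({false, true})   // null read as non-null: not ok
            << '\n';                      // prints 110
}
```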
Partially reverts #4025, removing that code, and updates the test to show
that we now do the optimization.
interpreter (#4023)
The tail call spec does not include subtyping because it is based on the
upstream spec, which does not contain subtyping. However, there is no reason
that subtyping shouldn't apply to tail calls like it does for any other call or
return. Update the validator to allow subtyping and avoid a related null pointer
dereference while we're at it.
Do not run the test with --nominal because it is buggy in that mode.
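A toy model of the rule (hypothetical helpers; Binaryen's validator uses its own Type machinery): a tail call is valid when the callee's result type is a subtype of the caller's declared result type, exactly as for a plain return.
```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Toy hierarchy: each type names its supertype ("" for none).
bool isSubType(const std::string& sub, const std::string& super,
               const std::vector<std::pair<std::string, std::string>>& supers) {
  if (sub == super) return true;
  for (auto& [t, s] : supers) {
    if (t == sub && !s.empty()) return isSubType(s, super, supers);
  }
  return false;
}

// A tail call is valid if what the callee returns is a subtype of what the
// calling function declares it returns - just as for a normal return.
bool validTailCall(const std::string& calleeResult,
                   const std::string& callerResult,
                   const std::vector<std::pair<std::string, std::string>>& supers) {
  return isSubType(calleeResult, callerResult, supers);
}

int main() {
  std::vector<std::pair<std::string, std::string>> supers = {
      {"$sub", "$super"}, {"$super", ""}};
  assert(validTailCall("$sub", "$super", supers));  // refinement: ok
  assert(!validTailCall("$super", "$sub", supers)); // widening: invalid
}
```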
(#4031)
This is necessary when using GC and EH together, for instance.
(#4027)
callees (#4025)
If there is a tail call, we can't change the return type of the function, as it
must match the return type of every function that tail-calls it.
Corresponds to #4014, which did the same for parameter types. This sees
whether the types actually returned from a function allow us to use
a more specific type for the function's return. If so, we update that type, as
well as calls to the function.
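A sketch of the underlying computation, with a toy type hierarchy standing in for the wasm type lattice (hypothetical names, not the pass's code): take the least upper bound of all returned types, and refine the declared result if the LUB is more specific.
```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Toy hierarchy: child -> parent. A hypothetical stand-in for the heap-type
// lattice that the real pass consults.
std::map<std::string, std::string> parent = {
    {"$vtable", "$struct"}, {"$struct", "$any"}, {"$array", "$any"}};

std::vector<std::string> ancestry(std::string t) {
  std::vector<std::string> chain{t};
  while (parent.count(t)) chain.push_back(t = parent[t]);
  return chain;
}

// Least upper bound: the most specific type in both ancestry chains.
std::string lub(const std::string& a, const std::string& b) {
  auto cb = ancestry(b);
  for (auto& t : ancestry(a)) {
    for (auto& u : cb) {
      if (t == u) return t;
    }
  }
  return "$any";
}

int main() {
  // Suppose scanning the function body found these returned types:
  std::vector<std::string> returnedTypes = {"$vtable", "$struct"};
  std::string result = returnedTypes.front();
  for (auto& t : returnedTypes) result = lub(result, t);
  // Declared result was $any; we can refine it to the LUB.
  std::cout << "refine (result $any) => (result " << result << ")\n";
}
```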
The pass handled non-nullability, but another case is a tuple that has
nullable values in it but is assigned non-nullable values - and, more
generally, anything else that is nondefaultable without being non-nullable.
Ignore those as well.
Given
(local $x (ref $foo))
(local.set $x ..)
(.. (local.get $x))
If we remove the local.set but not the get, then we end up with
(local $x (ref $foo))
(.. (local.get $x))
It looks like the local.get reads the initial value of a non-nullable local,
which is not allowed.
In practice, this would crash in precompute-propagate which would
try to propagate the initial value to the get. Add an assertion there with
a clear message, as until we have full validation of non-nullable locals
(and the spec for that is in flux), that pass is where bugs will end up
being noticed.
To fix this, replace the get as well. We can replace it with a null
for simplicity; it will never be used anyhow.
This also uncovered a small bug with `reached` not containing all
the things we reached - it was missing local.gets.
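A toy illustration of the shape of the fix (hypothetical IR structures, not Binaryen's):
```cpp
#include <iostream>
#include <string>
#include <vector>

// Toy IR, just enough to show the shape of the fix.
struct Inst {
  std::string op;    // "local.set", "local.get", "ref.null", ...
  std::string local; // which local it touches, if any
};

int main() {
  std::vector<Inst> body = {{"local.set", "$x"}, {"local.get", "$x"}};

  // Removing only the set would leave a read of a non-nullable local's
  // (nonexistent) default value. So when dropping the set of $x we must
  // also replace every get of $x; a null is the simplest stand-in, since
  // the value will never actually be used.
  std::vector<Inst> newBody;
  for (auto& inst : body) {
    if (inst.local == "$x" && inst.op == "local.set") {
      continue; // drop the set
    }
    if (inst.local == "$x" && inst.op == "local.get") {
      newBody.push_back({"ref.null", ""}); // replace the get too
      continue;
    }
    newBody.push_back(inst);
  }
  for (auto& inst : newBody) std::cout << inst.op << ' ' << inst.local << '\n';
}
```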
If a local is say anyref, but all values assigned to it are something
more specific like funcref, then we can make the type of the local
more specific. In principle that might allow further optimizations, as
the local.gets of that local will have a more specific type that their
users can see, like this:
(import .. (func $get-funcref (result funcref)))
(func $foo (result i32)
(local $x anyref)
(local.set $x (call $get-funcref))
(ref.is_func (local.get $x))
)
=>
(func $foo (result i32)
(local $x funcref) ;; updated to a subtype of the original
(local.set $x (call $get-funcref))
(ref.is_func (local.get $x)) ;; this can now be optimized to "1"
)
A possible downside is that using more specific types may not end
up allowing optimizations but may end up increasing the size of
the binary (say, replacing lots of anyref with various specific
types that compress more poorly; also, for recursive types the LUB
may be a unique type appearing nowhere else in the wasm). We
should investigate the code size factors more later.
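A compact sketch of the decision (hypothetical names, with a two-element stand-in lattice): the candidate type is the LUB of everything assigned to the local, and the rewrite is only worthwhile when that LUB is strictly more specific than the declared type.
```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Stand-in lattice: a two-element chain where a smaller rank is a more
// specific type. The real pass uses the full wasm type lattice.
std::map<std::string, int> rank = {{"funcref", 0}, {"anyref", 1}};

// LUB on a chain is just the less specific of the two types.
std::string lub(const std::string& a, const std::string& b) {
  return rank[a] >= rank[b] ? a : b;
}

int main() {
  // The types of all values written to local $x by its local.sets:
  std::vector<std::string> setTypes = {"funcref", "funcref"};
  std::string declared = "anyref";

  std::string refined = setTypes.front();
  for (auto& t : setTypes) refined = lub(refined, t);

  // Only rewrite when the LUB is strictly more specific than the old type.
  if (rank[refined] < rank[declared]) {
    std::cout << "(local $x " << declared << ") => (local $x " << refined
              << ")\n"; // gets of $x now expose the more specific type
  }
}
```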
Practically NFC, but it does reorder some code a little. Previously we would
find a "zero", then shrink segments, then use that zero - which might no
longer be in the table. That seems weird, so this reorders that, but there
should be no significant difference in the output.
Also reduce the factor of 100 to 1, which in practice is important on one of
the Dart GC benchmarks that has a huge number of table segments.
If a function is always called with arguments of a more specific type than it
declares for its parameters, we can make the parameter types more specific.
DeadArgumentElimination's name is becoming increasingly misleading, and should
maybe be renamed. But it is the right place for this as it already does an LTO
scan of the call graph and builds up parameter data structures etc.
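A minimal sketch of the call-site scan (hypothetical data, not the pass's code):
```cpp
#include <iostream>
#include <string>
#include <vector>

int main() {
  // Hypothetical result of an LTO-style scan: the type of the first argument
  // at every call of one function across the whole program.
  std::vector<std::string> argTypesAtCalls = {"$cat", "$cat", "$cat"};
  std::string declaredParam = "$animal"; // assume $cat is a subtype of it

  // If every call site passes the same more specific type (in general, if
  // the LUB of all argument types is more specific than the declaration),
  // we can tighten the parameter type and update the calls.
  bool allAgree = true;
  for (auto& t : argTypesAtCalls) {
    allAgree = allAgree && t == argTypesAtCalls.front();
  }
  if (allAgree && argTypesAtCalls.front() != declaredParam) {
    std::cout << "(param " << declaredParam << ") => (param "
              << argTypesAtCalls.front() << ")\n";
  }
}
```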
tryToRemoveFunctions (#4013)
tryToRemoveFunctions() will reload the wasm from binary if it fails to
optimize, and without the names section we don't have a guarantee on the
names being the same after that. And then tryToEmptyFunctions would
look for a name, and crash.
In the reverse order there is no risk, as tryToEmptyFunctions does not
reload the wasm from binary; it carefully undoes what it tried to do when
it fails.
Instead of skipping to the end, move quickly towards the end. This is
sometimes more efficient, as jumping from a big factor straight to a factor
of 1 can skip over big opportunities to remove code all at once, leaving
only one-instruction-at-a-time removal.
For signed or unsigned extension to 64 bits after lowering from partially filled 64-bit arguments:
```rust
i64.extend_i32_u(i32.wrap_i64(x)) => x // where maxBits(x) <= 32
i64.extend_i32_s(i32.wrap_i64(x)) => x // where maxBits(x) <= 31
```
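The side conditions are what make these sound: if at most 32 (respectively 31) low bits of x can be set, wrapping to i32 and extending back reproduces x. A standalone check (illustration, not Binaryen code; assumes C++20 narrowing semantics):
```cpp
#include <cassert>
#include <cstdint>

// i64.extend_i32_u(i32.wrap_i64(x)) expressed as plain C++ casts.
uint64_t wrapThenExtendU(uint64_t x) {
  return static_cast<uint64_t>(static_cast<uint32_t>(x));
}

// i64.extend_i32_s(i32.wrap_i64(x)).
uint64_t wrapThenExtendS(uint64_t x) {
  return static_cast<uint64_t>(
      static_cast<int64_t>(static_cast<int32_t>(static_cast<uint32_t>(x))));
}

int main() {
  for (uint64_t x = 0; x < (1ull << 20); x++) {
    // Here maxBits(x) <= 20, so both side conditions hold and each
    // wrap-then-extend round trip is the identity.
    assert(wrapThenExtendU(x) == x);
    assert(wrapThenExtendS(x) == x);
  }
  // With bit 31 set (maxBits = 32), only the unsigned rule still applies:
  assert(wrapThenExtendU(0x80000000ull) == 0x80000000ull);
  assert(wrapThenExtendS(0x80000000ull) != 0x80000000ull);
}
```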
Similar to #4005 but on OptimizeInstructions instead of RemoveUnusedBrs.
removeUnneededBlocks() can force us to use a local when we
load the emitted wasm, which can't work for something nondefaultable
like an RTT.
For now, just don't run that optimization if GC is enabled. Eventually,
perhaps we'll want to enable this optimization in a different way.
Fixes #3973
Loads:
f32.reinterpret_i32(i32.load(x)) => f32.load(x)
f64.reinterpret_i64(i64.load(x)) => f64.load(x)
i32.reinterpret_f32(f32.load(x)) => i32.load(x)
i64.reinterpret_f64(f64.load(x)) => i64.load(x)
Stores:
f32.store(y, f32.reinterpret_i32(x)) => i32.store(y, x)
f64.store(y, f64.reinterpret_i64(x)) => i64.store(y, x)
i32.store(y, i32.reinterpret_f32(x)) => f32.store(y, x)
i64.store(y, i64.reinterpret_f64(x)) => f64.store(y, x)
Also optimize reinterprets that are undone:
i32.reinterpret_f32(f32.reinterpret_i32(x)) => x
i64.reinterpret_f64(f64.reinterpret_i64(x)) => x
f32.reinterpret_i32(i32.reinterpret_f32(x)) => x
f64.reinterpret_i64(i64.reinterpret_f64(x)) => x
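The load rules hold because a load plus a reinterpret moves the same bytes as a load at the other type; `std::bit_cast` makes the bitwise no-op explicit (standalone C++20 illustration, not Binaryen code):
```cpp
#include <bit>      // std::bit_cast (C++20)
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  unsigned char memory[8] = {0, 0, 0x80, 0x3f, 0, 0, 0, 0}; // 1.0f, LE

  // f32.reinterpret_i32(i32.load(x)): load as i32, then reinterpret.
  uint32_t asInt;
  std::memcpy(&asInt, memory, 4);
  float viaReinterpret = std::bit_cast<float>(asInt);

  // f32.load(x): load the same bytes directly as f32.
  float direct;
  std::memcpy(&direct, memory, 4);

  assert(viaReinterpret == direct); // same bytes, same value: 1.0f

  // And reinterprets that undo each other are the identity:
  assert(std::bit_cast<uint32_t>(std::bit_cast<float>(asInt)) == asInt);
}
```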
This removes the code that did so one function at a time, and instead
does it on an exponentially growing set of functions.
On large testcases where other methods do not work, this is very
useful.
Also adjust the factor so that this runs 20x more often, which in practice
is very helpful too.
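A sketch of the strategy (hypothetical oracle and names; the real reducer re-runs the interestingness test on the actual wasm): attempt to remove a chunk of functions at once, doubling the chunk on success and falling back to single functions on failure.
```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical interestingness oracle: does the testcase still reproduce the
// bug with this chunk of functions removed? A real reducer re-runs the test.
bool stillInteresting(const std::vector<std::string>& removedChunk) {
  return removedChunk.size() <= 4; // pretend big removals break reproduction
}

int main() {
  std::vector<std::string> funcs = {"$a", "$b", "$c", "$d", "$e", "$f"};
  std::size_t i = 0, chunk = 1;
  while (i < funcs.size()) {
    std::size_t end = std::min(funcs.size(), i + chunk);
    std::vector<std::string> attempt(funcs.begin() + i, funcs.begin() + end);
    if (stillInteresting(attempt)) {
      std::cout << "removed " << attempt.size() << " function(s) at once\n";
      i = end;
      chunk *= 2; // success: grow the chunk exponentially
    } else {
      chunk = 1;  // failure: fall back towards one function at a time
      ++i;        // (simplified; a real reducer would retry smaller chunks)
    }
  }
}
```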
requiring sign-extension:
```
i64(x) << 56 >> 56 ==> i64.extend8_s(x)
i64(x) << 48 >> 48 ==> i64.extend16_s(x)
i64(x) << 32 >> 32 ==> i64.extend32_s(x)
i64.extend_i32_s(i32.wrap_i64(x)) ==> i64.extend32_s(x)
```
general:
```
i32.wrap_i64(i64.extend_i32_s(x)) ==> x
i32.wrap_i64(i64.extend_i32_u(x)) ==> x
```
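Why the first rule holds: shifting left by 56 and then arithmetic-shifting right by 56 replicates bit 7 into all higher bits, which is exactly an 8-bit sign extension. A standalone check (C++20 shift semantics assumed, matching wasm's shl/shr_s):
```cpp
#include <cassert>
#include <cstdint>

// The shift pair from the rule above, on a signed 64-bit value.
int64_t shiftPair8(int64_t x) { return (x << 56) >> 56; }

// i64.extend8_s: sign-extend the low 8 bits.
int64_t extend8s(int64_t x) { return static_cast<int8_t>(x); }

int main() {
  for (int64_t x = -100000; x <= 100000; x++) {
    assert(shiftPair8(x) == extend8s(x));
  }
}
```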
The spec disallows that.
Fixes #3990
This reverts commit b68691e826a46d1b03b27c552b1f5b7f51f92665.
Instead of applying the workaround to avoid precomputing SIMD in more places,
which prevents things we could optimize before, we should probably focus on
making the workaround not needed - that is, implement full SIMD support in the
interpreter (the reason for that PR), and the other TODO in the comment here:
// Until engines implement v128.const and we have SIMD-aware optimizations
// that can break large v128.const instructions into smaller consts and
// splats, do not try to precompute v128 expressions.
i32(x) < 0 ? i32(-1) : i32(1) -> x >> 31 | 1
i64(x) < 0 ? i64(-1) : i64(1) -> x >> 63 | 1
This shrinks the code by 2 bytes.
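The replacement works because an arithmetic right shift by 31 yields -1 (all ones) for negative x and 0 otherwise, and OR-ing with 1 maps those to -1 and 1. A standalone check (arithmetic shift on signed values, as in wasm's shr_s):
```cpp
#include <cassert>
#include <cstdint>

// The select pattern: x < 0 ? -1 : 1.
int32_t viaSelect(int32_t x) { return x < 0 ? -1 : 1; }

// The replacement: x >> 31 is -1 for negative x (arithmetic shift), else 0;
// OR-ing with 1 turns those into -1 and 1 respectively.
int32_t viaShift(int32_t x) { return (x >> 31) | 1; }

int main() {
  for (int32_t x = -100000; x <= 100000; x++) {
    assert(viaSelect(x) == viaShift(x));
  }
  assert(viaSelect(INT32_MIN) == viaShift(INT32_MIN));
}
```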
Without this fix, DCE would end up calling getHeapType() on the unreachable
input, which hits an assertion as it has no heap type.
|