Similar to #7017 and #7018
TypeUpdater which it uses internally already does so, but we must also
ignore such types earlier, and make no other modifications to them.
Helps #7015
When EH+GC are enabled, wasm has non-nullable types, and the
sent exnref should be non-nullable. In Binaryen IR we use the
non-nullable type all the time, as we also do for function references
and other things; if GC is not enabled, we lower it to a nullable type
for the binary format (see `WasmBinaryWriter::writeType`, to which
comments were added in this PR). That is, this PR makes us handle
exnref the same as those other types.
A new test verifies that behavior. Various existing tests are updated
because ReFinalize will now use the more refined type, so this is
an optimization. It is also a bugfix, as in #6987 we started to emit
the refined form in the fuzzer, and this PR makes us handle it
properly in validation and ReFinalization.
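As a hedged sketch of that behavior (hypothetical label name; `..` elides the body):
  (block $catch (result (ref exn))        ;; non-nullable exnref in Binaryen IR
    (try_table (catch_all_ref $catch)
      ..
    )
  )
When GC is not enabled, the binary writer lowers that result type to the
nullable exnref.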
Similar to Break, BrOn, etc., we must apply subtyping constraints of the
types we send to blocks, so that Unsubtyping will not remove subtypings
that are actually needed.
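A sketch of the constraint (hypothetical types, with $sub declared as a
subtype of $super):
  (block $b (result (ref $super))
    (br $b
      (local.get $x)   ;; $x has type (ref $sub)
    )
    ..
  )
The br sends a (ref $sub) to a block typed (ref $super), so Unsubtyping must
keep the $sub <: $super relation.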
A mutable exported global might be shared with another module that writes to
it using the current type. Refining it would be unsafe, and the type system
does not allow it, so do not refine there.
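A sketch of the hazard (hypothetical type $T):
  (global $g (export "g") (mut (ref null $T)) (ref.null $T))
Refining $g's type to (ref $T) could break another module that imports the
global with its original type and writes a null into it.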
(#7008)
When we gather strings, we create a new global for each one, which then
becomes the canonical defining global that is used everywhere else. We
create such a global if we lack one, but if we already have one - a global
that simply defines a string - then we reuse it. However, we didn't handle
the case where a use appeared before the definition, and we failed to sort
the definition before the use.
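A sketch of the pattern that was mishandled (hypothetical names, assuming
stringref-style globals):
  (global $use (ref string) (global.get $defining))    ;; use appears first
  (global $defining (ref string) (string.const "x"))   ;; definition after
The defining global must be sorted before the global.get that reads it.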
If we have
  (drop
    (block $b (result exnref)
      (try_table (catch_all_ref $b)
        ..
      )
    )
  )
then we don't really need to send the ref: it is dropped, so we can just replace
catch_all_ref with catch_all and then remove the drop and the block value.
MergeBlocks already had logic to remove block values, so it is the natural
place to add this.
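That is, the example above can then become
  (block $b
    (try_table (catch_all $b)
      ..
    )
  )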
with ref.as_non_null (#7004)
(any.convert_extern/extern.convert_any (ref.as_non_null ..))
=>
(ref.as_non_null (any.convert_extern/extern.convert_any ..))
This then allows the RefAsNonNull to be combined with parents in some cases
(whereas the reverse allows nothing).
This allows
(block $out (result i32)
(try_table (catch..)
..
(br $out
(i32.const 42)
)
)
)
=>
(block $out (result i32)
(try_table (result i32) (catch..) ;; add a result
..
(i32.const 42) ;; remove the br around the value
)
)
(#6994)
In #6984 we optimized dropped blocks even if they had unreachable code. In #6988
that part was reverted, and blocks with unreachable code were ignored once more.
However, I realized that the check was not actually for unreachable code, but for
having an unreachable child, so it would miss things like this:
(block
(block
..
(br $somewhere) ;; unreachable type, but no unreachable code
)
)
But it is useful to merge such blocks: we don't need the inner block here.
To fix this, just run ReFinalize if we change anything, which will propagate
unreachability as needed. I think MergeBlocks was written before we had
that utility, so it didn't use it...
This is not only useful for itself but will unblock an EH optimization in a
later PR, that has code in this form. It also simplifies the code by removing
the hasUnreachableChild checks.
BrOn does not always send a value. This is an odd asymmetry in the wasm
spec, where br_on_null does not send the null along the branch (which makes
sense, but the asymmetry does mean we need to special-case it).
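For example (hypothetical local $ref):
  (block $b                ;; no result: the branch carries no value
    (drop
      (br_on_null $b       ;; branches on null without sending the null;
        (local.get $ref)   ;; otherwise the non-null ref flows out and is dropped
      )
    )
  )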
#6980 was missing the logic to reset flows after replacing a throw. The process
of replacing the throw introduces new code and in particular a drop, which
blocks branches from flowing to their targets.
In the testcase here, the br was turned into nop before this fix.
Also make Try/TryTables with type none, not just concrete types as
before.
We ignored legacy Trys in #6980, but they can also catch.
When I refactored the optimizeDroppedBlock logic in #6982, I didn't move the
unreachability check with that code, which was wrong. When that function
was called from another place in #6984, the fuzzer found an issue.
Diff without whitespace is smaller. This reverts almost all the test updates
from #6984 - those changes were on blocks with unreachable children.
The change was safe on them, but in general removing a block value in the
presence of unreachable code is tricky, so it's best to avoid it.
The testcase is a little bizarre, but it's the one the fuzzer found and I can't
find a way to generate a better one (other than to reduce it, which I did).
Just call optimizeDroppedBlock from visitDrop to handle that.
Followup to #6982. This optimizes the new testcase added there. Some older
tests also improve.
This change is NFC on all things we previously optimized, but also makes
us optimize TryTable, BrOn, etc., by replacing hard-coded logic for Break
with generic code.
Also simplify the code there a little - we didn't really need ControlFlowWalker.
0x7800 as FP16 is 32,768, not 32,767 as I had in the comment.
I also added another comment to explain the somewhat confusing behavior.
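For reference, FP16 has 1 sign, 5 exponent, and 10 mantissa bits, so 0x7800
is sign 0, exponent 0b11110 = 30, mantissa 0, which gives 2^(30-15) = 2^15 =
32,768.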
This just moves the code out into a function. A later PR will use it in another place.
Add a test that shows the motivation for that later PR: we fail to optimize away
a block return value at the top level of a function. Fixing that will involve calling
the new function here from another place.
E.g.
(try_table (catch_all $catch)
(throw $e)
)
=>
(try_table (catch_all $catch)
(br $catch)
)
This can then allow other passes to remove the TryTable, if no throwing
things remain.
Support 5-segment source mappings, which add a name.
Reference:
https://github.com/tc39/source-map/blob/main/source-map-rev3.md#proposed-format
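(In that format, a 5-field segment's VLQ values decode as [generated column,
source index, source line, source column, name index], with the fifth field
indexing into the map's "names" array.)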
This raises the number of locals accepted by the binary parser to the
absolute limit in the spec. A warning is now printed when writing a
binary file if the Web limit of 50,000 locals is exceeded.
Fixes #6968.
When we precompute something, we try to avoid allocating a new copy. That's
important to avoid many allocations each time we run Precompute - otherwise,
each time we see a br we'd allocate a fresh one, and fresh ones for its values.
But we had a bug where we reused a RefFunc as the value of a br without
updating the type. It's actually tricky to reach a situation where we find a
RefFunc to reuse that is different from the actual one we want, but the fuzzer
found one.
Fixes the fuzz bug reported on #6845 (but unrelated to that PR).
Note: FP16 is a little different from F32/F64, since it can't represent the
full 2^16 integer range: 65504 is the largest representable whole number. This
leads to some slightly strange behavior when converting integers greater than
65504, since they become infinity.
Specified at
https://github.com/WebAssembly/half-precision/blob/main/proposals/half-precision/Overview.md
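(For reference, FP16's largest finite value is (2 - 2^-10) * 2^15 =
65536 - 32 = 65504.)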
Inlining was careful about nested calls like this:
(call $a
(call $b)
)
If we inlined the outer call first, we'd have
(block $inlined-code-from-a
..code..
(call $b)
)
After that, the inner call is a child of a block, not of a call. That is,
we've moved the inner call to another parent. To replace that
inner call when we inline, we'd need to update the new parent,
which would require work. To avoid that work, the pass simply
created a block in the middle:
(call $a
(block
(call $b)
)
)
Now the inner call's immediate parent will not change when we
inline the outer call.
However, it turns out that this was entirely unnecessary. We find
the calls using a post-order traversal, and we store the actions in
a vector that we traverse in order, so we only ever process things
in the optimal order of children before parents. And in that order
there is no problem: inlining the inner call first leads to
(call $a
(block $inlined-code-from-b
(..code..)
)
)
That does not affect the outer call's parent.
This PR removes the creation of the unnecessary blocks. This doesn't
improve the final output as optimizations remove the unneeded
blocks later anyhow, but it does make the code simpler and a little
faster. It also makes debugging less confusing. But this is not truly
NFC because --inlining (but not --inlining-optimizing) will actually
emit fewer blocks now (but only --inlining-optimizing is used by
default in production).
The diff on tests here is very small when ignoring whitespace. The
remaining differences are just emitting fewer obviously-unneeded
blocks. There is also one test that needed manual changes,
inlining-eh-legacy, because it tested that we do Pop fixups, but
after emitting one fewer block, those fixups were not needed. I
added a new test there with two nested calls, which does end up
needing those fixups. I also added such a test in
inlining_all-features so that we have coverage for such nested
calls (we might remove the eh-legacy file some day, and other
existing tests with nested calls that I found were more complex).
We may inline multiple times into a single function. Previously, if we did so, we
did the "fixups" such as ReFinalize and non-nullable local fixes once per such
inlining. But that is wasteful as each ReFinalize etc. scans the whole function,
and could be done after we copy all the code from all the inlinings, which is
what this PR does: it splits doInlining() into one function that inlines code and
one that does the updates after, and the update is done after all inlinings.
This turns out to be very important, a 5x speedup on two large real-world wasm
files I am looking at. The reason is that we actually inline more than once in
half the cases, and sometimes far more - in one case we inline over 1,000
times into a function! (and ran ReFinalize 1,000 times too many)
This is practically NFC, but it turns out that there are some tiny noticeable
differences between running ReFinalize once at the end vs. once after each
inlining. These differences are not really functional or observable in the
behavior of the code, and optimizations would remove them anyhow, but
they are noticeable in two tests here. The changes to tests are, in order:
* Different block names, just because the counter we use sees more things.
* In a testcase with unreachable code, we inline twice into a function, and
the first inlining brings in an unreachable, and ReFinalizing early will lead
to it propagating differently than if we wait to ReFinalize. (It actually leads
to another cycle of inlining in that case, as a fluke.)
The purpose of the datacount section is to pre-declare how many data
segments there will be so that engines can allocate space for them
and not have to back patch subsequent instructions in the code section
that refer to them. Once we use IRBuilder in the binary parser, we will
have to have the data segments available by the time we parse
instructions that use them, so eagerly construct the data segments when
parsing the datacount section.
When a struct.get or array.get is optimized to have a null reference
operand, its return type loses meaning since the operation will always
trap. Previously when refinalizing such expressions, we just left their
return type unchanged since there was no longer an associated struct or
array type to calculate it from. However, this could lead to a strange
setup where the stale return type was the last remaining use of some
heap type in the module. That heap type would never be emitted in the
binary, but it was still used in the IR, so type optimizations would
have to keep updating it. Our type collecting logic went out of its way
to include the return types of struct.get and array.get expressions to
account for this strange possibility, even though it otherwise collected
only types that would appear in binaries.
In principle, all of this should have applied to `call_ref` as well, but
the type collection logic did not have the necessary special case, so
there was probably a latent bug there.
Get rid of these special cases in the type collection logic and make it
impossible for the IR to use a stale type that no longer appears in the
binary by updating such stale types during finalization. One possibility
would have been to make the return types of null accessors unreachable,
but this violates the usual invariant that unreachable instructions must
either have unreachable children or be branches or `(unreachable)`.
Instead, refine the return types to be uninhabitable non-nullable
references to bottom, which is nearly as good as refining them directly
to unreachable.
We can consider refining them to `unreachable` in the future, but
another problem with that is that it would currently allow the parsers
to admit more invalid modules with arbitrary junk after null accessor
instructions.
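A sketch of the refinalized form (hypothetical types, where field 0 of $T
has type (ref null $U)):
  (struct.get $T 0      ;; type becomes (ref none), not the stale (ref null $U)
    (ref.null none)
  )
The stale (ref null $U) return type no longer keeps $U alive in the IR.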
The module splitting utility has a configuration option for minimizing
new export names, but it was previously applied only to newly exported
functions. Using the new multi-split mode can produce lots of exported
tables and splitting WasmGC programs can produce lots of exported
globals, so minimizing these export names can have a big impact on code
size.
The configuration for the module splitting utility previously took a set
of functions to keep in the primary module. Change it to take a list of
functions to split into the secondary module instead. This improves the
code quality in multi-split mode because it keeps stub functions
generated by previous splits from being moved into secondary modules
during later splits.
Maintain the invariant that every defined function belongs to either
the set of kept functions or the set of split functions. Functions are
kept by default except when --keep-funcs is specified without
--split-funcs on the command line. This is mostly NFC except that it
changes the default behavior when no arguments are specified on the
command line to keep all functions.
This will simplify a follow-on PR that switches from passing the kept
functions to the module splitting utility to passing the split
functions.
We emit a select between two values when only two objects of a particular
type exist. However, if the field is packed, we did not handle truncating the
written values.
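A sketch of the corrected output (hypothetical i8 field whose two objects
are written 0x100 and 0x1ff):
  (select
    (i32.const 0)      ;; 0x100 truncated to 8 bits
    (i32.const 255)    ;; 0x1ff truncated to 8 bits
    (..condition..)
  )
Reads of a packed field see the truncated values, so the select arms must be
truncated as well.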
Rather than analyze what module elements from the primary module a
secondary module will need, the splitting logic conservatively imports
all module elements from the primary module into the secondary module.
Run RemoveUnusedElements on the secondary module to remove any of these
imports that happen to be unnecessary. Leave a TODO mentioning the
possibility of being more selective about which module elements get
exported to reduce code size in the primary module, too.
Add a mode that splits a module into arbitrarily many parts based on a
simple manifest file. This is currently implemented by splitting out one
module at a time in a loop, but this could change in the future if
splitting out all the modules at once would improve the quality of the
output.
In the WebAssembly text format, strings can generally be arbitrary
bytes, but identifiers must be valid UTF-8. Check for UTF-8 validity
when parsing string-style identifiers in the lexer.
Update StringLowering to generate valid UTF-8 global names even for
strings that may not be valid UTF-8 and test that text round tripping
works correctly after StringLowering.
Fixes #6937.
Wasm-split generally assumes that calls to secondary functions made
before the secondary module has been loaded and instantiated should go
to imported placeholder functions that can be responsible for loading
the secondary module and forwarding the call to the loaded function.
That scheme makes the loading entirely transparent from the
application's point of view, which is not always a good thing. Other
schemes would make it impossible for a secondary function to be called
before the secondary module has been explicitly loaded, in which case
the placeholder functions would never be called. To improve code size
and simplify instantiation under these schemes, add a new
`--no-placeholders` option that skips adding imported placeholder
functions.
There are a few heap types that are hard-coded to be considered public
and therefore allowed on module boundaries even in --closed-world mode,
specifically to support js-string-builtins. We previously considered
both open and closed (i.e. final) mutable i8 arrays to be public in this
manner, but js-string-builtins only uses the closed versions, so remove
the open versions.
This fixes a particular bug in which Unsubtyping optimized a private
array type to be equivalent to an ignorable public array type,
incorrectly changing the behavior of a cast, but it does not address the
larger problem of optimizations producing types that are equivalent to
public types. Add a TODO about that problem for now.
Fixes #6935.
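A sketch of the distinction in text format (hypothetical names):
  (type $closed (array (mut i8)))        ;; final: still considered public
  (type $open (sub (array (mut i8))))    ;; open: no longer considered public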
The error checking we had to report an error when the input contains
block parameters was in a code path that is no longer executed under
normal circumstances. Specifically, it was part of the
`ParseModuleTypesCtx` phase of parsing, which no longer parses function
bodies. Move the error checking to the `ParseDefsCtx` phase, which does
parse function bodies.
Fixes #6929.
#6400 fixed this case, but it handled only a `pop` that is an immediate
child of the current expression; a `pop` can be nested deeper down.
We conservatively run the EH fixup whenever we have a `pop` and create
`block`s in the optimization. We considered using `FindAll<Pop>` to make
it precise, but we decided the quadratic time complexity was not worth it.
Fixes #6918.
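A sketch of the nesting that the fixup now handles (hypothetical tag, with a
block created by the optimization):
  (catch $e
    (block $added
      (pop i32)   ;; no longer an immediate child of the catch
      ..
    )
  )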
To avoid having two separate topological sort utilities in the code
base, replace remaining uses of the old DFS-based, CRTP topological sort
with the newer Kahn's algorithm implementation.
This would be NFC, except that the new topological sort produces a
different order than the old topological sort, so the output of some
passes is reordered.
The pass optimizes loads and stores, so without a memory there is nothing to
do.
This only helps if the user set --low-memory-unused and also has no memory,
which is likely rare, but it's a trivial change so it seems worthwhile. In
particular, this pass constructs a LocalGraph, so the work we can avoid may
be substantial.
Update the remaining tests whose readability will be affected by the
removal of the old topological sort in #6902, no matter how small their
diffs would have been.
These are the tests that would otherwise have the largest diffs when
changing the topological sort used to sort types.
signature-refining_gto.wat also cannot be automatically updated, so
there is extra benefit to making sure it has stable output.
Unlike other module elements, types are not stored on the `Module`.
Instead, they are collected by traversing the IR before printing and
binary writing. The code that collects the types tries to optimize the
order of rec groups based on the number of times each type is used. As a
result, the output order of types generally has no relation to the input
order of types. In addition, most type optimizations rewrite the types
into a single large rec group, and the order of types in that group is
essentially arbitrary. Changes to the code for counting type uses,
sorting types, or sorting rec groups can yield very large changes in the
output order of types, producing test diffs that are hard to review and
potentially harming the readability of tests by moving output types away
from the corresponding input types.
To help make test output more stable and readable, introduce a tool
option that causes the order of output types to match the order of input
types as closely as possible. It is implemented by having the parsers
record the indices of the input types on the `Module` just like they
already record the type names. The `GlobalTypeRewriter` infrastructure
used by type optimizations associates the new types with the old indices
just like it already does for names and also respects the input order
when rewriting types into a large recursion group.
By default, wasm-opt and other tools clear the recorded type indices
after parsing the module, so their default behavior is not modified by
this change.
Follow-on PRs will use the new flag in more tests, which will generate
large diffs but leave the tests in stable, more readable states that
will no longer change due to other changes to the optimizing type
sorting logic.
The LocalGraph there was used for two purposes:
1. Get the list of gets and sets.
2. Get only the reachable gets and sets.
It is trivial to get all the gets and sets in a much faster way, by just walking the
code as this PR does. The downside is that we also consider unreachable gets
and sets, so unreachable code can prevent us from optimizing, but that seems
worthwhile as many passes make that assumption (and they all become
maximally effective after --dce). That is the only non-NFC part here.
Removing LocalGraph + the fixup code for unreachability makes this
significantly shorter, and also 2-3x faster.
Now that the header includes topological sort utilities in addition to
the utility that iterates over all topological orders, it makes more
sense for it to be named topological_sort.h