| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously the classification of public types propagated public
visibility only through types that had previously been collected by
`collectHeapTypes`. Since there are settings that cause
`collectHeapTypes` to collect fewer types, it was possible for public
types to be missed if they were public only because they were reached
via an uncollected type.
Ensure that all public heap types are properly classified by propagating
public visibility even through types that are not part of the collected
output.
Fixes #7103.
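As an illustration, a minimal sketch in wat (type and function names
hypothetical) of how publicness must propagate:
```wat
(module
  ;; $inner is reachable only through $pair.
  (type $inner (struct (field i32)))
  (type $pair (struct (field (ref null $inner))))
  ;; Exporting $f makes its signature public, which makes $pair public,
  ;; which in turn must make $inner public as well.
  (func $f (export "f") (param (ref null $pair)))
)
```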
|
|
|
| |
See https://github.com/WebAssembly/memory64/pull/92
|
|
|
|
| |
`ModuleUtils::copyTable` was not copying the `indexType` property.
|
|
|
|
|
|
|
| |
Support 5-segment source mappings, which add a name.
Reference:
https://github.com/tc39/source-map/blob/main/source-map-rev3.md#proposed-format
|
|
|
|
|
|
|
|
| |
In preparation for using IRBuilder in the binary parser, eagerly create
Functions when parsing the function section so that they are already
created once we parse the code section. IRBuilder will require the
functions to exist when parsing calls so it can figure out what type
each call should have, even when there is a call to a function whose
body has not been parsed yet.
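A small sketch of why this is needed (names hypothetical): when the parser
reaches the body of `$early`, it must already know the type of `$late` to
give the `call` a type, even though `$late`'s body appears later in the
code section:
```wat
(module
  (func $early (result i32)
    ;; The result type of this call comes from $late's signature, which
    ;; the function section declares before any bodies are parsed.
    (call $late)
  )
  (func $late (result i32)
    (i32.const 42)
  )
)
```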
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a struct.get or array.get is optimized to have a null reference
operand, its return type loses meaning since the operation will always
trap. Previously when refinalizing such expressions, we just left their
return type unchanged since there was no longer an associated struct or
array type to calculate it from. However, this could lead to a strange
setup where the stale return type was the last remaining use of some
heap type in the module. That heap type would never be emitted in the
binary, but it was still used in the IR, so type optimizations would
have to keep updating it. Our type collecting logic went out of its way
to include the return types of struct.get and array.get expressions to
account for this strange possibility, even though it otherwise collected
only types that would appear in binaries.
In principle, all of this should have applied to `call_ref` as well, but
the type collection logic did not have the necessary special case, so
there was probably a latent bug there.
Get rid of these special cases in the type collection logic and make it
impossible for the IR to use a stale type that no longer appears in the
binary by updating such stale types during finalization. One possibility
would have been to make the return types of null accessors unreachable,
but this violates the usual invariant that unreachable instructions must
either have unreachable children or be branches or `(unreachable)`.
Instead, refine the return types to be uninhabitable non-nullable
references to bottom, which is nearly as good as refining them directly
to unreachable.
We can consider refining them to `unreachable` in the future, but
another problem with that is that it would currently allow the parsers
to admit more invalid modules with arbitrary junk after null accessor
instructions.
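A sketch of the new behavior (hypothetical types): the `struct.get` below
will always trap, and refinalization now refines its stale return type to
an uninhabitable bottom reference:
```wat
(module
  (type $elem (struct))
  (type $box (struct (field (ref null $elem))))
  (func $get (result anyref)
    ;; The reference operand is null, so this always traps. Instead of
    ;; keeping the stale type (ref null $elem), which would keep $elem
    ;; alive in the IR, the expression is now typed (ref none).
    (struct.get $box 0 (ref.null none))
  )
)
```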
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Unlike other module elements, types are not stored on the `Module`.
Instead, they are collected by traversing the IR before printing and
binary writing. The code that collects the types tries to optimize the
order of rec groups based on the number of times each type is used. As a
result, the output order of types generally has no relation to the input
order of types. In addition, most type optimizations rewrite the types
into a single large rec group, and the order of types in that group is
essentially arbitrary. Changes to the code for counting type uses,
sorting types, or sorting rec groups can yield very large changes in the
output order of types, producing test diffs that are hard to review and
potentially harming the readability of tests by moving output types away
from the corresponding input types.
To help make test output more stable and readable, introduce a tool
option that causes the order of output types to match the order of input
types as closely as possible. It is implemented by having the parsers
record the indices of the input types on the `Module` just like they
already record the type names. The `GlobalTypeRewriter` infrastructure
used by type optimizations associates the new types with the old indices
just like it already does for names and also respects the input order
when rewriting types into a large recursion group.
By default, wasm-opt and other tools clear the recorded type indices
after parsing the module, so their default behavior is not modified by
this change.
Follow-on PRs will use the new flag in more tests, which will generate
large diffs but leave the tests in stable, more readable states that
will no longer change due to other changes to the optimizing type
sorting logic.
|
|
|
|
|
| |
Now that the header includes topological sort utilities in addition to
the utility that iterates over all topological orders, it makes more
sense for it to be named `topological_sort.h`.
|
|
|
|
|
|
|
| |
We previously incremented the use count for a declared supertype only if
it was also a type we had never seen before. Fix the count by treating
the supertype the same as any other type used in a type definition.
Update tests accordingly, including by manually moving input types
around to better match the output.
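For example, in the sketch below the `(sub $A ...)` clause now counts as a
use of `$A` for ordering purposes, just like the reference in the field
type does:
```wat
(module
  (type $A (sub (struct)))
  ;; Both the supertype declaration and the field below count as uses
  ;; of $A when computing use counts for the output type order.
  (type $B (sub $A (struct (field (ref null $A)))))
  (func $f (param (ref null $B)))
)
```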
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Many passes need to know both the set of all used types and also the
sets of private or public types. Previously there was no API to get both
at once, so getting both required two API calls that internally
collected all the types twice.
Furthermore, there are many reasons to collect heap types, and they have
different requirements about precisely which types need to be collected.
For example, in some edge cases the IR can reference heap types that do
not need to be emitted into a binary; passes that replace all types
would need to collect these types, but the binary writer would not. The
existing APIs for collecting types did not distinguish between these use
cases, so the code conservatively collected extra types that were not
always needed.
Refactor the type collecting code to expose a new API that takes a
description of which types need to be collected and returns the
appropriate types, their use counts, and optionally whether they are
each public or private.
Keep this change non-functional by commenting on places where the code
could be cleaned up or improved rather than actually making the changes.
Follow-up PRs will implement the improvements, which will necessarily
come with test changes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rec groups need to be topologically sorted for the output module to be
valid, but the specific order of rec groups also affects the module size
because types at lower indices require fewer bytes to reference. We
previously optimized for code size when gathering types by sorting the
list of groups before doing the topological sort. This was brittle,
though, and depended on implementation details of the topological sort
to be correct.
Replace the old topological sort with use of the new
`TopologicalSort::minSort` utility, which is a more principled method of
achieving a minimal topological sort with respect to some comparator.
Also draw inspiration from ReorderGlobals and apply an exponential
factor to take the users of a rec group into account when determining
its weight.
|
|
|
|
|
|
|
|
|
| |
Previously we included supertypes, but did not increase their count.
This was done so that the output for the nominal type system, which
introduced explicit supertypes, would more closely match the output
with the old equirecursive type system. Neither type system exists
anymore and we only support the single, standard isorecursive type
system, so we can now properly count supertypes. It turns out it doesn't
make much of a difference in the test outputs anyway.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously the pass would monomorphize a call when we were sending more
refined types than the target expects. This generalizes the pass to also consider
the case where we send a constant as a parameter.
To achieve that, this refactors the pass to explicitly define the "call context",
which is the code around the call (inputs and outputs) that may end up leading
to optimization opportunities when combined with the target function. Also
add comments about the overall design + roadmap.
The existing test is mostly unmodified, and the diff there is smaller when
ignoring whitespace. We do "regress" those tests by adding more local.set
operations. That is a consequence of the refactoring that makes things a
lot simpler: to handle the general case of an operand being either of a
refined type or a constant, we copy it inside the function, which works
either way. This
"regression" is only in the testing version of the pass (the normal version
runs optimizations, which would remove that extra code).
This also enables the pass when GC is disabled. Previously we only handled
refined types, so only GC could benefit. Add a test for MVP content
specifically to show we operate there as well.
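A sketch of the new pattern the pass can handle, with hypothetical
function names; the constant from the call context is copied into a
specialized version of the target via a `local.set`:
```wat
;; Before: the call sends a constant argument.
(module
  (func $caller
    (call $target (i32.const 42))
  )
  (func $target (param $x i32)
    (drop (local.get $x))
  )
)

;; After (sketch): a monomorphized copy bakes the constant into the body,
;; where follow-up optimizations can propagate it.
(module
  (func $caller
    (call $target_42)
  )
  (func $target_42
    (local $x i32)
    (local.set $x (i32.const 42))
    (drop (local.get $x))
  )
)
```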
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We automatically copy debuginfo in replaceCurrent(), but there are a few
places that perform operations other than simple replacements. call-utils.h will
turn a call_ref with a select target into two direct calls, and we were missing
the logic to copy debuginfo from the call_ref to the calls.
To make this work, refactor out the copying logic from wasm-traversal, into
debuginfo.h, and use it in call-utils.h.
debuginfo.h itself is renamed from debug.h (as now this needs to be included
from wasm-traversal, which nearly everything does, and it turns out some files
have internal stuff like a debug() helper that ends up conflicting with the old
debug namespace).
Also rename the old copyDebugInfo function to copyDebugInfoBetweenFunctions
which is more explicit. That is also moved from the header to a cpp file because
it depends on wasm-traversal (so we'd end up with recursive headers otherwise).
That is fine, as that method is called after copying a function, which is not that
frequent. The new copyDebugInfoToReplacement (which was refactored out of
wasm-traversal) is in the header because it can be called very frequently (every
single instruction we optimize) and we want it to get inlined.
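The call-utils.h transformation in question, sketched with hypothetical
names; the fix copies the `call_ref`'s debug location onto both of the
direct calls:
```wat
(module
  (type $ft (func (param i32)))
  (elem declare func $a $b)
  (func $a (param i32))
  (func $b (param i32))
  (func $before (param $arg i32) (param $cond i32)
    ;; a call_ref whose target is a select between two functions
    (call_ref $ft
      (local.get $arg)
      (select (result (ref $ft))
        (ref.func $a)
        (ref.func $b)
        (local.get $cond))))
  (func $after (param $arg i32) (param $cond i32)
    ;; what call-utils.h turns it into: two direct calls, both of which
    ;; should carry the call_ref's debug info
    (if (local.get $cond)
      (then (call $a (local.get $arg)))
      (else (call $b (local.get $arg))))))
```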
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#6520)
;;@
with nothing else (no source:line) can be used to specify that the following
expression does not have any debug info associated with it. This can be used
to stop the automatic propagation of debug info in the text parsers.
The text printer has also been updated to output this comment when needed.
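For example (hypothetical source file):
```wat
(module
  (func $f
    ;;@ src.cpp:10:2
    (nop) ;; gets the debug location src.cpp:10:2
    (nop) ;; would also get it, via automatic propagation
    ;;@
    (nop) ;; the bare ;;@ above stops propagation: no debug info here
  )
)
```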
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we had passes --generate-stack-ir, --optimize-stack-ir, --print-stack-ir
that could be run like any other passes. After generating StackIR it was stashed on
the function and invalidated if we modified BinaryenIR. If it wasn't invalidated then
it was used during binary writing. This PR switches things so that we optionally
generate, optimize, and print StackIR only during binary writing. It also removes
all traces of StackIR from wasm.h - after this, StackIR is a feature of binary writing
(and printing) logic only.
This is almost NFC, but there are some minor noticeable differences:
1. We no longer print `has StackIR` in the text format when we see it is there. It
will not be there during normal printing, as it is only present during binary writing.
(But --print-stack-ir still works as before; as mentioned above, it runs during writing.)
2. --generate/optimize/print-stack-ir change from being passes to being flags that
control that behavior instead. As passes, their order on the commandline mattered,
while now it does not, and they only "globally" affect things during writing.
3. The C API changes slightly, as there is no need to pass it an option "optimize" to
the StackIR APIs. Whether we optimize is handled by --optimize-stack-ir which is
set like other optimization flags on the PassOptions object, so we don't need the
old option to those C APIs.
The main benefit here is simplifying the code, so we don't need to think about
StackIR in more places than just binary writing. That may also allow future
improvements to our usage of StackIR.
|
|
|
|
|
|
|
| |
- Only write explicit function names.
- When merging modules, the names of types, globals, and tags in all
modules but the first were lost.
- Set name as explicit when copying a function with a new name.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This PR is part of a series that adds basic support for the [typed
continuations/wasmfx proposal](https://github.com/wasmfx/specfx).
This particular PR adds support for the `cont.bind` instruction for partially
applying continuations, documented
[here](https://github.com/wasmfx/specfx/blob/main/proposals/continuations/Overview.md#instructions).
In short, these instructions are of the form `(cont.bind $ct_before $ct_after)`
where `$ct_before` and `$ct_after` are related continuation types. They must
only differ in the number of arguments, where `$ct_before` has _n_ additional
parameters as compared to `$ct_after`, for some _n_ ≥ 0. The idea is that
`(cont.bind $ct_before $ct_after)` then takes a reference to a continuation of
type `$ct_before` as well as _n_ operands and returns a (reference to a)
continuation of type `$ct_after`. Thus, the folded textual representation looks
like `(cont.bind $ct_before $ct_after arg1 ... argn c)`.
Support for the instruction is implemented in both the old and the new wat
parser.
Note that this PR does not implement validation of the new instruction.
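A sketch with hypothetical types, for the case _n_ = 1 (so `$ct_before`
takes one extra i32):
```wat
(module
  (type $ft_before (func (param i32 i32) (result i32)))
  (type $ct_before (cont $ft_before))
  (type $ft_after (func (param i32) (result i32)))
  (type $ct_after (cont $ft_after))
  (func $bind1 (param $c (ref $ct_before)) (result (ref $ct_after))
    ;; binds the first i32 parameter of the continuation to 7
    (cont.bind $ct_before $ct_after (i32.const 7) (local.get $c))
  )
)
```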
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This PR is part of a series that adds basic support for the [typed
continuations/wasmfx proposal](https://github.com/wasmfx/specfx).
This particular PR adds support for the `cont.new` instruction for creating
continuations, documented [here](https://github.com/wasmfx/specfx/blob/main/proposals/continuations/Overview.md#instructions).
In short, these instructions are of the form `(cont.new $ct)` where `$ct` must
be a continuation type. The instruction takes a single (nullable) function
reference as its argument, which means that the folded representation of the
instruction is of the form `(cont.new $ct (foo ...))`.
Support for the instruction is implemented in both the old and the new wat
parser.
Note that this PR does not implement validation of the new instruction.
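A sketch with hypothetical types:
```wat
(module
  (type $ft (func (param i32) (result i32)))
  (type $ct (cont $ft))
  (elem declare func $worker)
  (func $worker (param i32) (result i32)
    (local.get 0)
  )
  (func $make (result (ref $ct))
    ;; takes a (nullable) function reference and wraps it in a new
    ;; continuation of type $ct
    (cont.new $ct (ref.func $worker))
  )
)
```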
|
|
|
|
|
| |
This PR is part of a series that adds basic support for the [typed continuations proposal](https://github.com/wasmfx/specfx).
This particular PR adds support for the `resume` instruction. The most notable missing feature is validation, which is not implemented yet.
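A minimal sketch in the folded text format, following the proposal's
syntax (hypothetical types; handler clauses, written `(on $tag $label)`,
are omitted here):
```wat
(module
  (type $ft (func (result i32)))
  (type $ct (cont $ft))
  (func $run (param $c (ref $ct)) (result i32)
    ;; resume the continuation and return its result when it completes
    (resume $ct (local.get $c))
  )
)
```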
|
|
|
|
| |
Those fields should be copied together with all the rest of the metadata that
already is. This was just missed in the prior PR.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Type annotations on multivalue blocks (and loops, ifs, and trys) are type
indices that refer to function types in the type section. For these type
annotations, the identities of the function types do not matter. As long as
the referenced type has the correct parameters and results, it will be valid to
use.
Previously, when collecting module types, we always used the "default" function
type for multivalue control flow, i.e. we used a final function type with no
supertypes in a singleton rec group. However, in cases where the program already
contains another function type with the expected signature, using the default
type is unnecessary and bloats the type section.
Update the type collecting code to reuse existing function types for multivalue
control flow where possible rather than unconditionally adding the default
function type. Similarly, update the binary writer to use the first heap type
with the required signature when emitting annotations on multivalue control flow
structures. To make this all testable, update the printer to print the type
annotations as well, rather than just the result types. Since the parser was not
able to parse those newly emitted type annotations, update the parser as well.
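For example, in the sketch below the annotation on the multivalue block
can now reuse `$pair` rather than forcing a fresh default function type
with the same signature into the type section:
```wat
(module
  (type $pair (func (result i32 i32)))
  ;; an existing use of $pair elsewhere in the module
  (func $two (type $pair)
    (i32.const 1)
    (i32.const 2)
  )
  (func $f (result i32 i32)
    ;; this block's annotation reuses $pair; its identity does not
    ;; matter, only its signature
    (block (type $pair) (result i32 i32)
      (i32.const 3)
      (i32.const 4)
    )
  )
)
```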
|
|
|
|
| |
A later PR will add getSuperType which will mean "get the general super type -
either declared, or not".
|
|
|
|
|
| |
Probably any array of non-reference data can be allowed to be public and sent
out of the module, as it is just data. For now, however, just special case the i8
and i16 array types which are useful already for string interop.
|
|
|
|
|
|
|
|
| |
None of that code is speed-sensitive, or at least doesn't need to be inlined to be
fast. Move it to cpp for faster compile times.
This caused a cascade of necessary header fixes (i.e. after removing unneeded
header inclusions in module-utils.h, files that improperly depended on that
stopped working and needed an added include).
|
|
|
|
|
|
|
|
|
|
|
|
| |
The final versions of the br_on_cast and br_on_cast_fail instructions have two
reference type annotations: one for the input type and one for the cast target
type. In the binary format, this is represented as a flags byte followed by two
encoded heap types. Upgrade all of the tests at once to use the new versions of
the instructions and drop support for the old instructions from the text parser.
Keep support in the binary parser to avoid breaking users, though. Drop some
binary tests of deprecated instruction encodings that would be more effort to
update than they're worth.
Re-land with fixes of #5734
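The final text format, sketched with hypothetical types; the two
reference type annotations are the input type and the cast target:
```wat
(module
  (type $A (sub (struct)))
  (type $B (sub $A (struct (field i32))))
  (func $f (param $x (ref null $A)) (result (ref $B))
    (drop
      (block $fail (result (ref null $A))
        ;; annotated with the input type and the cast target type; in
        ;; the binary this is a flags byte plus two heap types
        (return
          (br_on_cast_fail $fail (ref null $A) (ref $B) (local.get $x)))))
    (unreachable)
  )
)
```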
|
|
|
|
|
|
|
| |
This reverts commit b7b1d0df29df14634d2c680d1d2c351b624b4fbb.
See comment at the end of #5734: It turns out that dropping the old opcodes causes
problems for current users, so let's revert this for now, and later we can figure out
how best to do the update.
|
|
|
|
|
|
|
|
|
|
| |
The final versions of the br_on_cast and br_on_cast_fail instructions have two
reference type annotations: one for the input type and one for the cast target
type. In the binary format, this is represented as a flags byte followed by two
encoded heap types. Since these instructions have been in flux for a while, do
not attempt to maintain backward compatibility with older versions of the
instructions. Instead, upgrade all of the tests at once to use the new versions
of the instructions. Drop some binary tests of deprecated instruction encodings
that would be more effort to update than they're worth.
|
|
|
|
|
|
|
|
|
|
|
| |
Data/Elem (#5692)
ArrayNewSeg => ArrayNewSegData, ArrayNewSegElem
ArrayInit => ArrayInitData, ArrayInitElem
Basically we remove the opcode and use the class type to differentiate them.
This adds some code but it makes the representation simpler and more compact in
memory, and it will help with #5690
|
|
|
|
|
| |
And since the only type system left is the standard isorecursive type system,
remove `TypeSystem` and its associated APIs entirely. Delete a few tests that
only made sense under the now-removed type systems.
|
|
|
|
|
| |
These complement array.copy, which we already supported, as an initial complete
set of bulk array operations. Replace the WIP spec tests with the upstream spec
tests, lightly edited for compatibility with Binaryen.
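For instance, `array.fill`, one of the operations in this family, sketched
with a hypothetical array type:
```wat
(module
  (type $bytes (array (mut i8)))
  (func $clear (param $a (ref $bytes))
    ;; array.fill takes the destination array, a start index, the fill
    ;; value, and a length; here it zeroes the whole array
    (array.fill $bytes
      (local.get $a)
      (i32.const 0)
      (i32.const 0)
      (array.len (local.get $a)))
  )
)
```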
|
|
|
|
| |
Without this we hit an assertion on trying to write the binary, on a missing
heap type.
|
|
|
|
|
|
|
|
| |
To match the standard instruction name, rename the expression class without
changing any parsing or printing behavior. A follow-on PR will take care of the
functional side of this change while keeping support for parsing the old name.
This change will allow `ArrayInit` to be used as the expression class for the
upcoming `array.init_data` and `array.init_elem` instructions.
|
|
|
|
|
|
|
|
|
| |
As well as br_on_cast_fail null. Unlike the existing br_on_cast* instructions,
these new instructions treat the cast as succeeding when the input is a null.
Update the internal representation of the cast type in `BrOn` expressions to be
a `Type` rather than a `HeapType` so it will include nullability information.
Also update and improve `RemoveUnusedBrs` to handle the new instructions
correctly and optimize in more cases.
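A sketch of the null-accepting behavior in the final notation
(hypothetical type): because the cast target is nullable, a null input
takes the branch as a successful cast:
```wat
(module
  (type $T (struct))
  (func $f (param $x anyref) (result i32)
    (drop
      (block $match (result (ref null $T))
        ;; with a nullable cast target, a null input branches to $match
        (br_on_cast $match anyref (ref null $T) (local.get $x))
        ;; fall through: the value is neither a $T nor null
        (drop)
        (return (i32.const 0))))
    (i32.const 1)
  )
)
```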
|
|
|
|
| |
It is implemented as an import, but functionally it is a call within the
module, so it does not cause types to be public.
|
|
|
| |
This new variant of ref.test returns 1 if the input is null.
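In the final notation, the two variants differ only in the nullability of
the cast type (sketch):
```wat
(module
  (type $T (struct))
  (func $test (param $x anyref) (result i32 i32)
    ;; 0 for a null input: null is not a (ref $T)
    (ref.test (ref $T) (local.get $x))
    ;; 1 for a null input: null is a (ref null $T)
    (ref.test (ref null $T) (local.get $x))
  )
)
```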
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since #5347 public types are never updated by type optimizations, but the
optimization passes have not yet been updated to take that into account, so they
are all buggy under an open world assumption. In #5359 we worked around many
closed world validation errors in the fuzzer by treating --closed-world like a
feature flag and checking whether it was necessary for fuzzer input, but that
did not prevent the type optimization passes from running under an open world,
so it did not work around all the potential issues.
Work around the problem more thoroughly by not running any type optimization
passes in the fuzzer without --closed-world. Also add logic to those passes to
error out if they are run without --closed-world and update the tests
accordingly.
|
|
|
|
|
|
|
|
|
| |
The latest upstream version of ref.cast is parameterized with a target reference
type, not just a heap type, because the nullability of the result is
parameterizable. As a first step toward implementing these new, more flexible
ref.cast instructions, change the internal representation of ref.cast to use the
expression type as the cast target rather than storing a separate heap type
field. For now require that the encoded semantics match the previously allowed
semantics, though, so that none of the optimization passes need to be updated.
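In the final form this prepares for, the cast target is a full reference
type, so nullability is explicit (sketch):
```wat
(module
  (type $T (struct))
  (func $casts (param $x anyref) (result (ref $T))
    ;; a nullable target lets a null pass through the cast
    (drop (ref.cast (ref null $T) (local.get $x)))
    ;; a non-nullable target traps on a null input
    (ref.cast (ref $T) (local.get $x))
  )
)
```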
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The type rewriting utility in type-updating.cpp gathers all the used heap types,
then rewrites them to newly built and possibly modified heap types. The problem
is that for the isorecursive type system, the set of "used" heap types was
overly broad because it also included unused heap types that are in a rec group
with used types. In the context of emitting a binary, it is important to treat
these types as used because failing to emit them would change the identity of
the used types, but in the context of type optimizations it is ok to treat them
as truly unused because we are changing type identities anyway.
Update the type rewriting utility to only include truly used types in the set of
output types. This causes all existing type optimizations to implicitly drop
unused types, but only if they find any other optimizations to do and actually
run the rewriter utility. Their output will also still include unused types
that were used before their optimizations were applied.
To overcome these limitations and better match the optimizing power of nominal
mode, which never includes unused types in the output, add a new type
optimization pass that removes unused types and does nothing else and run it
near the end of the global optimization pipeline.
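A sketch of the distinction (hypothetical types):
```wat
(module
  (rec
    (type $used (struct))
    ;; $unused is in the same rec group, so the binary writer must keep
    ;; it to preserve $used's identity, but type optimizations may now
    ;; treat it as truly unused and drop it.
    (type $unused (struct (field i64)))
  )
  (func $f (param (ref null $used)))
)
```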
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Do not optimize or modify public heap types in any way. Public heap types
include the types of imported or exported functions, tables, globals, etc. This
is important to maintain the public interface of a module and ensure it can
still link and interact as intended with the outside world.
Also add a validation error if we find any nontrivial public types that are not
the types of imported or exported functions. This error is meant to help the
user ensure that type optimizations are not silently inhibited. In the future,
we may want to add options to silence this error or downgrade it to a warning.
This commit only updates the type updating machinery to avoid updating public
types. It does not update any optimization passes accordingly. Since we avoid
modifying public signature types already, this is not expected to break
anything, but in the future once we have function subtyping or if we make the
error optional, we may have to update some of our optimization passes.
|
|
|
|
| |
Equirecursive is no longer standards track and its implementation is extremely
complex. Remove it.
|
|
|
|
|
|
|
|
|
|
|
|
| |
First, we forgot to note the type annotation on `ArrayNewSeg` instructions, so
in small modules where these are the only annotated instructions, the type
section would be incomplete.
Second, in the interpreter we were reserving space for the array before checking
that the segment access was valid. This could cause huge allocations that threw
bad_alloc exceptions before the interpreter could get around to trapping. Fix
the problem by reserving the array after validating the arguments.
Fixes #5236.
|
|
|
|
|
|
|
|
|
| |
If a heap type only ever appears as the result of a read, we must include it in
the analysis in ModuleUtils, even though it isn't written in the binary format.
Otherwise analyses using ModuleUtils can error on not finding all types in the
list of types.
Fixes #5180
|
|
|
|
|
| |
Annotations on array.get and array.set were not being counted and the code could
generally be simplified since `count` already ignores types that don't need to
be counted.
|
|
|
|
|
|
| |
The GC spec has been updated to have heap type annotations on call_ref and
return_call_ref. To avoid breaking users, we will have a graceful, multi-step
upgrade to the annotated version of call_ref, but since return_call_ref has no
users yet, update it in a single step.
|
|
|
|
|
|
|
| |
RTTs were removed from the GC spec, and if they are added back in the future,
they will be heap types rather than value types as in our implementation.
Updating our implementation to have RTTs be heap types would have been more work
than deleting them for questionable benefit since we don't know how long it will
be before they are specced again.
|
|
|
|
| |
Otherwise when a type is only used on a global, it will be incorrectly omitted
from the output.
|
|
|
|
|
|
| |
V8 requires that supertypes come before subtypes when it parses
isorecursive (i.e. standards-track) type definitions. Since 2268f2a we are
emitting nominal types using the standard isorecursive format, so respect the
ordering requirement.
|
|
|
|
|
|
|
|
|
| |
parallel analysis (#4620)
Normally ParallelFunctionAnalysis is just an analysis, and has no effects. However, in
SignatureRefining we actually do have side effects, due to an internal limitation of the
helper code it runs. This adds a template parameter to the class so users can note that
they do modify the IR. The parameter is added in the middle, as it is easier
to add it there than after the last one (the map).
|