| Commit message | Author | Age | Files | Lines |
|
The function type should be printed there just like for non-imported
functions.
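For example (a sketch, with illustrative names), the import is now printed with
its type, just like a defined function:
  (import "env" "foo" (func $foo (type $0) (param i32) (result i32)))
  (func $bar (type $0) (param i32) (result i32)
    ..)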
|
We depend on repeated calls to walk/visit accumulating effects, so this
was a bug; if we want to clear stuff then we create a new EffectAnalyzer.
Removing that fixes the attached testcase. Also added a unit test.
|
We used to have a wasm-merge tool but removed it for a lack of use cases. Recently
use cases have been showing up in the wasm GC space and elsewhere, as people are
using more diverse toolchains together, for example a project might build some C++
code alongside some wasm GC code. Merging those wasm files together can allow
for nice optimizations like inlining and better DCE etc., so it makes sense to have a
tool for merging.
Background:
* Removal: #1969
* Requests:
* wasm-merge - why it has been deleted #2174
* Compiling and linking wat files #2276
* wasm-link? #2767
This PR is a complete rewrite of wasm-merge, not a restoration of the original
codebase. The original code was quite messy (my fault), and also, since then
we've added multi-memory and multi-table, which make things a lot simpler.
The linking semantics are as described in the "wasm-link" issue #2767: all we do
is merge normal wasm files together and connect imports and exports. That is, we
have a graph of modules and their names, and each import to a module name can
be resolved to that module. Basically, like a JS bundler would do for JS, or, in other
words, we do the same operations as JS code would do to glue wasm modules
together at runtime, but at compile time. See the README update in this PR for a
concrete example.
There are no plans to do more than that simple bundling, so this should not
really overlap with wasm-ld's use cases.
This should be fairly fast as it works in linear time on the total input code. However,
it won't be as fast as wasm-ld, of course, as it does build Binaryen IR for each
module. An advantage to working on Binaryen IR is that we can easily do some
global DCE after merging, and further optimizations are possible later.
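For example (an illustrative sketch; the README update has the real example),
given a module named "first" that imports from a module named "second":
  ;; first.wat, with module name "first"
  (module
    (import "second" "bar" (func $bar))
    (func (export "main")
      (call $bar)))
  ;; second.wat, with module name "second"
  (module
    (func (export "bar")
      (nop)))
the merged output contains both functions, with the import resolved to a direct
call of the merged "bar", so inlining and DCE can then work across what used to
be a module boundary.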
|
See WebAssembly/stringref#46.
This format is already adopted by V8: https://chromium-review.googlesource.com/c/v8/v8/+/3892695.
The text format is left unchanged (see #5607 for a discussion on the subject).
I have also added support for string.encode_lossy_utf8 and
string.encode_lossy_utf8_array (by allowing the replace policy for
Binaryen's string.encode_wtf8 instruction).
|
Add a new "analysis" source directory that will contain the source for a new
static program analysis framework. To start the framework, add a CFG utility
that provides convenient iterators over the basic blocks of the CFG as well as
over the predecessors, successors, and contents of each block.
The new CFGs are constructed using the existing CFGWalker, but they are
different in that the new utility is meant to provide a usable representation of
a CFG whereas CFGWalker is meant to allow collecting arbitrary information about
each basic block in a CFG.
For testing and debugging purposes, add `print` methods to CFGs and basic
blocks. This requires exposing the ability to print expression contents
excluding children, which was something we previously did only for StackIR.
Also add a new gtest file with a test for constructing and printing a CFG. The
test reveals some strange properties of the current CFG construction, including
empty blocks and strange placement of `loop` instructions, but fixing these
problems is left as future work.
|
This adds an option to ignore effects in the parent in
getDroppedChildrenAndAppend. With that, this becomes usable in more
places, like Directize, basically in situations where we know we can ignore
effects in the parent (since we've inferred they are not needed). This lets
us get rid of some boilerplate code in Directize.
The diff without whitespace is a lot smaller. Another large part of the diff is a
rename of curr => parent, which I think makes it more readable, as then
parent/children is a clear contrast, and the new parameter "ignore/
notice parent effects" is obviously connected to "parent".
The top comment in drop.cpp is removed as it just duplicated the top
comment in the header drop.h.
This is basically NFC but using drop.h does bring the advantage of
emitting less code, see the test changes, so it is noticeable in the IR.
This is a refactoring PR in preparation for a larger improvement to
Directize that will also benefit from this new drop capability.
|
#4191 meant to do that, I think, but only did so for "pattern B". This does it
for all patterns, and adds assertions.
In theory this could regress code that benefits from partial inlining of
"pattern A" (since this PR stops doing it by default), but I did not see a significant
difference on any benchmarks, and it is easy to re-enable that behavior by
doing --partial-inlining-ifs=1.
|
This changes loops from having the effect "may trap (timeout)" to having
"may not return." The only noticeable difference is in TrapsNeverHappen mode,
which ignores the former but not the latter. So after this PR, in TNH mode we do
not optimize away an infinite loop that seems to have no other side effects. We
may also use this for other things in the future, like continuations/stack switching.
There are upsides and downsides to allowing the optimizer to remove infinite
loops (the C and C++ communities have had interesting discussions on that topic
over the years...) but it seems safer not to optimize them out for now, so that as
much code as possible works properly. If a need comes up to optimize such code, we can look
at various options then (like a flag to ignore infinite loops).
See discussion in #5228
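For example, a loop like this (a minimal sketch) has no effects other than
possibly not returning:
  (loop $l
    (br $l))
Before this change, TNH mode could remove it entirely; now its "may not return"
effect keeps it alive.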
|
TypeUpdater::remove must be called after removing a thing from the
tree. If not, then we can get confused by something like this:
(block $b
(br $b)
)
If we first call TypeUpdater::remove then we see that the block's only
br is going away, so it becomes unreachable. But when we then remove
the br then the block should have type none. Removing the br first
from the IR, and then calling TypeUpdater::remove, is the safe way to do it.
However, changing that order in Vacuum is not trivial. After looking into
this, I realized that it is just simpler to remove TypeUpdater entirely.
Instead, we can ReFinalize at the end unconditionally. This has the downside
that we do not propagate type updates "as we go", but that should be very
rare.
Another downside is that TypeUpdater tracks the number of brs, which
can help remove code like in the one test that regresses here (see comment
there). But I'm not sure that removal was valid - Vacuum should not really be
doing it, and it looks like it's related to this bug, actually. Instead, we have a
dedicated pass for removing unused brs - RemoveUnusedBrs - so leave
things for it.
This PR's benefit, aside from now handling the fuzz testcase, is that it makes
the code simpler and faster. I see a 10-25% speedup on the Vacuum pass on
various binaries I tested on. (Vacuum is one of our faster passes anyhow,
though, so the runtime of -O1 is not much improved.)
Another minor benefit might be that running ReFinalize more often can
propagate type info more quickly, thanks to #5704 etc. But that is probably
very minor.
|
Do the port automatically using the port_passes_tests_to_lit.py script.
|
A cycle of data is something we can't just naively emit as wasm globals. If
at runtime we end up, for example, with an object A that refers to itself,
then we can't just emit
(global $A
(struct.new $A
(global.get $A)))
The global.get refers to this very global, and such a self-reference is invalid. So
we need to break such cycles as we emit them. The simple idea used here
is to find paths in the cycle that are nullable and mutable, and replace the
initial value with a null that is fixed up later in the start function:
(global $A
(struct.new $A
(ref.null $A)))
(func $start
  (struct.set $A 0
    (global.get $A)
    (global.get $A)))
This is not optimal in terms of breaking cycles, but it is fast (linear time)
and simple, and does well in practice on j2wasm (where cycles in fact
occur).
|
Each time we inline we put the contents in a block. Before we used the same name
each time we inlined the same method, and as a result had many conflicts if a
function was inlined many times. With this PR we emit a different name each
time.
This is not 100% NFC as it does change block names, which is observable in the IR
(as can be seen in the test updates).
This helps #5696 in speeding up UniqueNameMapper.
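Roughly (the label names here are illustrative), two inlinings of $foo into the
same function used to both emit the same label, and now emit distinct ones:
  (block $__inlined_func$foo
    ..)
  (block $__inlined_func$foo$1
    ..)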
|
We already did this for nullability, and so for the same reasons we should do it
for heap types as well. Also, I realized that doing so would solve #5703, which
is the new test added for TypeRefining here.
The fuzz bug solved here is that our analysis of struct gets/sets will skip
copy operations - a read from a field that is written into it. And we skip
fallthrough values while doing so, since it doesn't matter if the read goes
through an if arm or a cast. An if would automatically get a more precise
type during refinalize, so this PR does the same for a cast basically.
Fixes #5703
|
Data/Elem (#5692)
ArrayNewSeg => ArrayNewSegData, ArrayNewSegElem
ArrayInit => ArrayInitData, ArrayInitElem
Basically we remove the opcode and use the class type to differentiate them.
This adds some code but it makes the representation simpler and more compact in
memory, and it will help with #5690
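The wasm instructions themselves are unchanged; each new class simply corresponds
to one instruction (a sketch with illustrative names):
  (array.new_data $arr $seg (i32.const 0) (i32.const 10)) ;; ArrayNewSegData
  (array.new_elem $arr $seg (i32.const 0) (i32.const 4))  ;; ArrayNewSegElem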
|
The testcase here has a recursive call that is also nested in itself, something
like
(return
(call $me
(return
..
This found a bug in our return value removal logic. When we remove a return
value we both modify call sites (to add drops) and modify returns
(to remove their values). One of those uses pointers into the IR that the other
invalidates, so the order of the two matters. This PR just reorders that code to
fix the bug.
|
This works around an edge case where:
- A function is too big to fully inline
- It is a candidate for partial inlining
- The outlined version becomes eligible for full inlining
In such a case, Binaryen would introduce a temporary state with
partially inlined functions and later inline them fully. J2CL hit this
scenario for String literals, which resulted in significant
regressions in compilation time.
This patch updates the partial inlining analysis to identify the
edge case and go directly to full inlining when that happens.
|
Before this fix we would flip all data segments to use the first memory.
|
We used to refine only for result changes, but param changes can
also lead to opportunities.
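For example (a sketch with illustrative types): if every call site passes a
(ref $sub) to a parameter declared (ref null $super), the parameter can be
refined, which may unlock more optimizations in the body:
  (func $f (param $x (ref null $super))
    ..)
  ;; every caller does something like
  (call $f
    (local.get $y)) ;; $y has type (ref $sub)
  ;; so $f's param can be refined to (ref $sub)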
|
We already deduplicated names in the names section (to defend against a weird
binary), but we also need to deduplicate the names of items not in the names
section, so they don't overlap with the names that are. See example in the testcase.
Normally wasm files use names for all items in each group. This only became
noticeable in some wasm-ctor-eval work where new temp globals were added
that were not given names.
|
Leaks happen since we use std::shared_ptr which does not handle cycles.
But since Binaryen isn't used in long-running code it's probably fine to just
let them leak, and ignore them in LSan, for now.
|
`assert_exception` is similar to `assert_trap` but for exceptions, which
is supported in the interpreter of the EH proposal
(https://github.com/WebAssembly/exception-handling/tree/main/interpreter).
We've been using `assert_trap` for both traps and exceptions, but this
PR distinguishes them.
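For example (illustrative), a spec test can now distinguish the two:
  (assert_trap (invoke "divide-by-zero") "integer divide by zero")
  (assert_exception (invoke "throw-an-exception"))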
|
arm (#5681)
The logic says that if an if/select has an arm that returns a null type, and the if/select
goes into a cast, then we can ignore that arm in tnh mode (as it would trap, and we
are ignoring the possibility of a trap). But it is not enough for an arm to have a
null type - the null must actually flow out, rather than, say, a return executing first.
One existing test needed adjustment, as it used calls for "thing with effects". But a
call can transfer control flow when EH is enabled, and this pass has -all. Rather
than mess with the features, I switched the effects to be locals.
|
This capability was originally introduced to support calculating LUBs in the
equirecursive type system, but has not been needed for anything except tests
since the equirecursive type system was removed. Since building basic heap types
is no longer useful and was a source of significant complexity, remove the APIs
that allowed it and the tests that used those APIs.
Also remove test/example/type-builder.cpp, since a significant portion of it
tested the removed APIs and the rest is already better tested in
test/gtest/type-builder.cpp.
|
* Disable sign extension in SignExtLowering.cpp
The sign extension lowering pass would previously lower away the sign extension
instructions, but it wouldn't disable the sign extension feature, so follow-on
passes such as optimize-instructions could reintroduce sign extension
instructions.
Fix the pass to disable the sign extension feature to prevent sign extension
instructions from being reintroduced later.
* update pass description
* Disable the memory64 feature in Memory64Lowering.cpp
For consistency with other feature lowering passes, disable memory64 in addition
to lowering its use away. Although no other passes would introduce new uses of
memory64 at the moment, this makes the lowering pass more robust against a
future where memory64 might accidentally be reintroduced after being lowered away.
* Update test/lit/passes/memory64-lowering-features.wast
Co-authored-by: Alon Zakai <azakai@google.com>
---------
Co-authored-by: Alon Zakai <azakai@google.com>
|
* Disable sign extension in SignExtLowering.cpp
The sign extension lowering pass would previously lower away the sign extension
instructions, but it wouldn't disable the sign extension feature, so follow-on
passes such as optimize-instructions could reintroduce sign extension
instructions.
Fix the pass to disable the sign extension feature to prevent sign extension
instructions from being reintroduced later.
* update pass description
|
immediately (#5673)
Emit an unreachable, but guarded by a block as we do in other cases in this pass, to avoid
having unreachable code that is not fully propagated during the pass (as we only do a full
refinalize at the end). See existing comments starting with
"Make sure to emit a block with the same type as us" in the pass.
This is mostly not a problem with other casts, but ref.test returns an i32, and we
have lots of code that tries to optimize i32s.
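That is (a sketch), rather than replacing such a ref.test with a bare
unreachable, we emit something like
  (block (result i32) ;; the same type the ref.test had
    (drop
      (local.get $ref)) ;; keep the operand's effects
    (unreachable))
so surrounding code that consumes the i32 keeps validating without an immediate
refinalize.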
|
And since the only type system left is the standard isorecursive type system,
remove `TypeSystem` and its associated APIs entirely. Delete a few tests that
only made sense under the nominal type system.
|
Before this PR we hit the assert on the type not being basic.
We could also look into fixing the caller to skip bottom types, but as
bottom types trivially have no subtypes, it is more future-facing to
simply handle it.
|
When we emit e.g. a struct.get's reference, this PR makes us prefer a non-nullable
value, and even reuse an existing local if possible. By doing that we reduce
the risk of a trap, and also by using locals we end up testing operations on the
same data, like this:
x = new A();
x.a = ..
foo(x.a)
In contrast, without this PR each of those x. uses might be a fresh new A().
|
After this change, the only type system usable from the tools will be the
standard isorecursive type system. The nominal type system is still usable via
the API, but it will be removed entirely in a follow-on PR.
|
These tests were easy to remove --nominal from because they already worked with
the standard type system as well.
|
Remove stale check lines for --nominal mode checks that were removed in #5660
and rerun auto_update_checks.py to ensure the checks are stable.
|
In preparation to remove the nominal type system, which is nonstandard and not
usable for modules with nontrivial external linkage requirements, port an
initial batch of tests to use the standard isorecursive type system.
The port involves reordering input types to ensure that supertypes precede their
subtypes and inserting rec groups to ensure that structurally identical types
maintain their separate identities.
More tests will be ported in future PRs before the nominal type system is
removed entirely.
|
Casting (ref nofunc) to (ref func) seems like it can succeed based on the rule
of "if it's a subtype, it can cast ok." But the fuzzer found a corner case where that
leads to a validation error (see testcase).
Refactor the cast evaluation logic to handle uninhabitable refs directly, and
return Unreachable for them (since the cast cannot even be reached).
Also reorder the rule checks there to always check for a non-nullable cast
of a bottom type (which always fails).
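A sketch of the corner case: (ref nofunc) is a subtype of (ref func) but has no
values at all, so a cast of it can never actually be reached:
  (ref.cast (ref func)
    (ref.as_non_null
      (ref.null nofunc))) ;; typed (ref nofunc): uninhabitable
Treating this as a guaranteed success led to the validation error; the
ref.as_non_null traps first, so Unreachable is the correct result.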
|
Without this, in certain complex operations we could end up calling a nested
make() operation that included nontrivial things, which could cause problems.
The specific problem I encountered was that in fixAfterChanges() we tried to fix up
a duplicate label, but calling makeTrivial() emitted something very large that
happened to include a new block with a new label nested under a struct.get,
and that block's label conflicted with a label we'd already processed.
|
Without the hint, we always look for a valid name using name$0, $1, $2, etc.,
starting from 0, and in some cases that can lead to quadratic behavior.
Noticed on a testcase in the fuzzer that runs for over 24 seconds (I gave up at
that point) but takes only 2 seconds with this.
|
Technically we need to filter both before and after combining. That is,
if a location's contents will be filtered by F(), and the new contents
are x and the old contents y, then we need to end up with
F(F(x) U F(y)). Filtering before is necessary to ensure the union
of content does not end up unnecessarily large, and the filtering
after is necessary to ensure the final result is properly filtered to fit.
(If our representation were perfect then this would not be needed, but
it is not, as the union of two exact types can end up as a very large
cone, for example.)
For efficiency we have been filtering afterwards. But that is not enough
for packed fields, it turns out, where we must filter before. If we don't,
then if inputs 0 and 0x100 arrive at an i8 field, combining them
we get "unknown integer" (which is then filtered by 0xff, but it's too
late). By filtering before, the actual values are both 0 and we end up
with that as the only possible value.
It turns out that filtering before is enough for such fields, so do only
that.
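A concrete sketch (illustrative types): with an i8 field, both written values
truncate to 0, so filtering before combining lets us infer the field is always 0:
  (type $S (struct (field (mut i8))))
  ..
  (struct.set $S 0
    (local.get $s)
    (i32.const 0))
  (struct.set $S 0
    (local.get $s)
    (i32.const 0x100)) ;; stored as 0x100 & 0xff = 0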
|
Like data.drop etc., it notices data segment identity.
|
We depended on ReFinalize doing it for us, and that usually works, but there is
a corner case that depends on knowing all the type changes being done. So use
our complete information to update those types in the pass.
|
Do not optimize out or split segments that are referred to by array.init_data
instructions. Fixes a bug where segments could get optimized out, producing
invalid modules. Doing the work to actually split segments used by
array.init_data is left for the future.
Also fix a latent UBSan failure revealed by the new test case.
|
The fuzzer found another case we were missing. I realized that we can just
check for this in replaceCurrent, at least for places that call that method,
which is the common case. So this simplifies the code while fixing a bug.
|
The same bug was present in both: We ignored packing, so writing a larger
value than fits in the field would lead to us propagating that original value.
|