| Commit message | Author | Age | Files | Lines |
... | |
| |
|
|
|
|
| |
We iterate over that data structure in two loops, and the fuzzer found a case
where the difference in iteration order actually ended up mattering in the output.
|
| |
|
|
|
|
|
|
| |
Also, fix a bug where a pointer was being used directly to
index into an Int32Array. I suppose this code had basically
zero users until I tried to land this change in emscripten:
https://github.com/emscripten-core/emscripten/pull/15742
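The bug class, illustrated with a standalone C++ sketch rather than the actual JS glue: a byte address has to be scaled by the element size before being used as an index into an array of 32-bit elements.
```cpp
#include <cassert>
#include <cstdint>

int main() {
  int32_t heap[4] = {10, 20, 30, 40};
  // A "pointer" into linear memory is a byte offset; element 2 lives at byte 8.
  uintptr_t byteAddr = 2 * sizeof(int32_t);

  // Wrong: using the byte address directly as an element index.
  // int32_t bad = heap[byteAddr];  // would read element 8, out of bounds

  // Right: divide the byte address by the element size first.
  int32_t good = heap[byteAddr / sizeof(int32_t)];
  assert(good == 30);
}
```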
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The general pattern is
  if (!global) { global = 1 }
This PR generalizes that to handle nested appearances, such as
  if ({
    if (!global) { global = 1 }
    !global
  }) {
    global = 1
  }
With this, I finally see no more "once" global operations on the hottest
function in the currently slowest j2wasm benchmark ("filter").
Also added a failing test case for something we do not handle yet.
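A rough C++ rendering of the nested shape (the names and the lambda are illustrative; in Binaryen IR the condition is a block that yields a value):
```cpp
#include <cstdio>

// Hypothetical global guarding one-time initialization.
static int global = 0;

int main() {
  // Simple "once" pattern: the global is read only to decide whether to write it.
  if (!global) { global = 1; }

  // Nested appearance: the condition itself contains the same pattern.
  // The inner wasm block is approximated with an immediately-invoked lambda.
  if ([] {
        if (!global) { global = 1; }
        return !global;
      }()) {
    global = 1;
  }

  std::printf("global = %d\n", global);
}
```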
|
|
|
|
|
|
|
| |
We have always cached nominal signature types keyed on their signatures to avoid
creating extra nominal types through the `HeapType::HeapType(Signature)`
constructor. However, that logic was previously built into the HeapTypeInfo
canonicalization system. To allow that system to be simplified in future PRs,
separate the caching into its own explicit layer.
|
|
|
|
| |
We don't use those effects now in any way, and if we need them some day
we can add them back. For now they just add overhead and complexity.
|
|
|
|
|
|
|
| |
Types and HeapTypes are inserted into their respective stores either by copying
a reference to a `TypeInfo` or `HeapTypeInfo` or by moving a
`std::unique_ptr<TypeInfo>` or `std::unique_ptr<HeapTypeInfo>`. Previously these
two code paths had separate, similar logic. To reduce duplication, combine
both code paths into a single method.
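A minimal sketch of the idea with toy types (not Binaryen's actual TypeInfo stores): both the copying and the moving entry points forward to one shared insertion method.
```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Toy stand-in for a type-info object; not Binaryen's actual TypeInfo.
struct Info {
  std::string key;
};

class Store {
  std::unordered_map<std::string, std::unique_ptr<Info>> entries;

  // The single shared insertion path: takes ownership and deduplicates by key.
  Info* insertOwned(std::unique_ptr<Info> info) {
    auto key = info->key;
    auto result = entries.try_emplace(key, std::move(info));
    return result.first->second.get();
  }

public:
  // Copying entry point forwards to the shared path.
  Info* insert(const Info& info) { return insertOwned(std::make_unique<Info>(info)); }
  // Moving entry point forwards to the shared path.
  Info* insert(std::unique_ptr<Info> info) { return insertOwned(std::move(info)); }
};

int main() {
  Store store;
  Info info{"sig"};
  Info* a = store.insert(info);                         // copy path
  Info* b = store.insert(std::make_unique<Info>(info)); // move path
  return a == b ? 0 : 1; // both paths end up at the same stored entry
}
```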
|
| |
|
|
|
|
|
|
|
|
|
| |
Previously, when a wasm exception was thrown and not caught in the interpreter,
it crashed the whole interpreter rather than being reported gracefully. This
fixes that problem, and also compares whether an uncaught exception happened
when comparing the results before and after optimizations in `--fuzz-exec`. To
do that, when `--fuzz-exec` is given, we now compare results even when the
function does not have return values. Logs for some existing tests have changed
because of this.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We do some postprocessing after parsing `Try` to make sure `delegate`
only targets `try`s and not `block`s:
https://github.com/WebAssembly/binaryen/blob/9659f9b07c1196447edee68fe04c8d7dd2480652/src/wasm/wasm-binary.cpp#L6404-L6426
But when the outer `try` has neither a `catch` nor a `delegate`, the previous
code returned prematurely, skipping the postprocessing and resulting in a
binary parsing error. This PR removes that early-exiting code.
Some test outputs have changed because `try`s are now assigned labels where we
previously exited early. But those labels can be removed by other optimization
passes when no inner `rethrow` or `delegate` targets them.
(On a side note, the restriction that `delegate` cannot target a `block` was
removed from the spec a few months ago, so if a `delegate` targets a `block`,
the exception is just rethrown from that block. But I still think this is a
convenient invariant to hold, at least within Binaryen IR. I'm planning to
allow parsing of `delegate`s targeting `block`s later, but I will make them
point to a `try` when read into the IR. At the moment the LLVM toolchain does
not generate such code.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds support for try-delegate in `EffectAnalyzer`. Without this
support, the expression below was incorrectly classified as "cannot
throw", because the previous code considered everything inside
`try`-`catch_all` as "cannot throw". That is not the case when there is
a `delegate` that can bypass the `catch_all`.
```wasm
try $l0
  try
    try
      throw $e
    delegate $l0
  catch_all
  end
end
```
|
|
|
|
|
|
|
|
| |
(#4338)
(i32(x) < 0) | (i32(y) < 0) ==> i32(x | y) < 0
(i32(x) != 0) | (i32(y) != 0) ==> i32(x | y) != 0
Likewise for i64.
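As a standalone sanity check of these identities (not Binaryen code), both forms can be brute-force tested on a few 32-bit values:
```cpp
#include <cassert>
#include <cstdint>

int main() {
  const int32_t samples[] = {0, 1, -1, 42, -42, INT32_MIN, INT32_MAX};
  for (int32_t x : samples) {
    for (int32_t y : samples) {
      // (x < 0) | (y < 0)  ==>  (x | y) < 0: the sign bit of x|y is set
      // exactly when either operand's sign bit is set.
      assert(((x < 0) | (y < 0)) == ((x | y) < 0));
      // (x != 0) | (y != 0)  ==>  (x | y) != 0: x|y has some bit set
      // exactly when either operand has some bit set.
      assert(((x != 0) | (y != 0)) == ((x | y) != 0));
    }
  }
}
```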
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously this pass would see something like this and fail:
  if (foo() + global) {
    global = 1;
  }
The call to foo() has side effects, so we did not optimize. However, in such
a case the side effects are safe: they happen anyhow, regardless of the global
that we are optimizing. That is, "global" is read only in order to be written,
even though other things also influence the decision to write it. "global" is
not used in a way that is observable: we can remove it, and nothing will notice
(except for things getting smaller/faster).
In other words, this PR lets us optimize the above example, while it also
avoids optimizing the dangerous cases, like this:
  if (foo(global)) {
    global = 1;
  }
Here "global" flows into a place that notices its value and may use it for
something other than deciding to write that global.
A common case where we want to optimize is combined ifs,
  if (foo()) {
    if (global) {
      global = 1;
    }
  }
which the optimizer turns into
  if (foo() & global) {
    global = 1;
  }
With this PR we can handle those things too.
This lets us optimize out some important globals in j2wasm, like the initializer
boolean for the Math object, reducing total code size by about 0.5%.
|
|
|
|
|
|
| |
PassOptions is a fairly large structure and even includes a std::map. I also
have plans to add further fields there to make it even larger. Before doing that
I noticed that in some places we copy it instead of being consistent and taking
it by reference, which this PR fixes.
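A generic illustration of the fix with a toy struct (not the actual PassOptions definition): large option structs should be taken by const reference so no copy is made.
```cpp
#include <cstdio>
#include <map>
#include <string>

// Toy stand-in for a large options struct; not Binaryen's actual PassOptions.
struct Options {
  bool optimize = true;
  int level = 2;
  std::map<std::string, std::string> arguments; // copying this map is not free
};

// Before: taking the struct by value copies the map on every call.
static int runByValue(Options options) { return options.level; }

// After: taking it by const reference avoids the copy.
static int runByRef(const Options& options) { return options.level; }

int main() {
  Options options;
  options.arguments["pass-arg"] = "value";
  std::printf("%d %d\n", runByValue(options), runByRef(options));
}
```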
|
|
|
| |
This adds handling of try in the Flatten pass.
|
|
|
|
|
|
|
|
|
|
|
|
| |
This removes the old hardcoded value numbering in that pass and makes
it use the new helper code that was split out earlier. The immediate benefit
of this is to make the code aware of identical constants: if two locals have
the same constant then they do not interfere. Future improvements to
numbering will also automatically help here.
This changes some constants in existing tests so that they keep testing
what they were testing before, and adds new tests for the new benefit here.
This implements a proposed TODO from #4314
|
|
|
|
|
|
| |
It seems that with this emscripten change, DCE is able to remove
the `assert` JS runtime function, making this call to assert fail
with `ReferenceError: assert is not defined`.
|
|
|
|
| |
(#4356)
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
With nominal function types, this change makes it so that we preserve the
identity of the function type used with call_indirect instructions, rather than
recreating a function heap type from the function signature during module
writing, which may or may not produce the same heap type as originally parsed.
This will simplify the type system implementation by removing the need to store
a "canonical" nominal heap type for each unique signature. We previously
depended on those canonical types to avoid creating multiple duplicate function
types during module writing, but now we aren't creating any new function types
at all.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds `EHUtils::handleBlockNestedPops`, which can be called at the
end of passes that may put `pop`s inside `block`s. This method assumes a
`pop` exists somewhere along the first-descendant line, even though it
may be nested within a block. A `pop` is allowed to be nested within a
`block` or a `try`, but not a `loop`, since that would mean the `pop`
could run multiple times. In the case of `if`, a `pop` can exist only in
its condition; if a `pop` is in the true or false body, it is not on
the first-descendant line.
This can be useful when optimization passes create blocks to do
transformations. Wrapping expressions with a block does not change
semantics most of the time, but if pops happen to be inside a block
generated by those passes, they can result in invalid binaries.
To test this, this adds `passes/test_passes.cpp`, which is intended to
contain multiple test passes, each testing one or more utility
functions separately. Without this kind of pass, it is hard to exercise
the various cases in which nested `pop`s can be generated by existing
passes. This PR also adds `PassRegistry::registerTestPass`, which
registers a pass that is intended only for internal testing and does not
show up in `wasm-opt --help`.
Fixes #4237.
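A rough sketch of the "first-descendant line" rule on a toy expression tree (illustrative kinds and names, not Binaryen's IR classes): walk first children, allow `block`s and `try`s, reject `loop`s, and enter an `if` only through its condition.
```cpp
#include <vector>

// Toy expression tree, standing in for Binaryen IR (illustrative only).
enum class Kind { Pop, Block, Try, Loop, If, Other };
struct Expr {
  Kind kind;
  std::vector<Expr*> children; // for If, children[0] is the condition
};

// Returns true if a pop appears along the first-descendant line starting at
// expr: nesting in blocks/trys is fine, loops are not (the pop could run
// multiple times), and an if may only be entered via its condition.
static bool popOnFirstDescendantLine(Expr* expr) {
  while (expr) {
    switch (expr->kind) {
      case Kind::Pop:
        return true;
      case Kind::Block:
      case Kind::Try:
      case Kind::If: // only the condition (first child) is on the line
        expr = expr->children.empty() ? nullptr : expr->children[0];
        break;
      case Kind::Loop:
      case Kind::Other:
        return false;
    }
  }
  return false;
}

int main() {
  Expr pop{Kind::Pop, {}};
  Expr block{Kind::Block, {&pop}};
  Expr try_{Kind::Try, {&block}};
  return popOnFirstDescendantLine(&try_) ? 0 : 1;
}
```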
|
| |
|
|
|
|
|
| |
Check that types that were meant to have a subtype relationship actually do. To
expose the intended subtyping to the fuzzer, expose `subtypeIndices` in the
return value of the type generation function.
|
|
|
| |
This just moves code out of RedundantSetElimination.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is fairly short and simple after the recent refactorings. It basically
just finds all uses of each signature/function type and then sees whether more
refined types are passed for its params; if so, it rewrites the type.
So far this handles arguments only, not return types.
This differs from DeadArgumentElimination's refineArguments() in that
that pass modifies each function by itself, changing the type of the
function as needed. That is only valid if the type is not observable, that
is, if the function is called indirectly then DAE ignores it. This pass will
work on the types themselves, so it considers all functions sharing a
type as a whole, and when it upgrades that type it ends up affecting them
all.
This finds optimization opportunities on 4% of the total signature
types in j2wasm. Those lead to some benefits in later opts, but the
effect is not huge.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As we work toward allowing nominal and structural types to coexist, any
difference in how they can be built or used will be an inconvenient footgun that
we will have to work around. In the spirit of reducing the differences between
the type systems, allow TypeBuilder to construct basic HeapTypes in nominal mode
just as it can in equirecursive mode.
Although this change is a net increase in code complexity for not much
benefit (wasm-opt never needs to build basic HeapTypes), it is also an
incremental step toward getting rid of separate type system modes, so I expect
it to simplify other PRs in the near future.
This change also uncovered a bug in how the type fuzzer generated subtypes of
basic HeapTypes. The generated subtypes did not necessarily have the intended
`Kind`, which caused failures in nominal subtype validation in the fuzzer.
|
|
|
|
|
|
|
|
| |
Fairly simple, this uses the existing infrastructure to find opportunities
to refine the type of a global variable. This is a common pattern in j2wasm,
for example, where a global begins as a null of $java.lang.Object (the
least specific type) but in practice is always assigned an object of
some more specific type.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It is common in GC code to have stuff like this:
  x = null;
  ..
  x = Data();
Nulls in wasm have a type, and if that initial null has, say, anyref, then
before this PR we would keep the type of x as anyref. However,
while nulls have types, all null values are identical, and so we can
in fact change x's type to a nullable reference of Data, by also
changing the null's type to something more specific.
LUBFinder now has an API that can return the best possible LUB
so far, and that can be told to update nulls if we decide that the
new LUB is worth using. This updates the passes using LUBFinder
to use the new API. Note how TypeRefining becomes simpler
because the special logic it had in a subclass of LUBFinder is now
part of the main class (it used to remember if there was a null
default; LUBFinder now handles both a null default as well as
other nulls).
This requires some changes to existing tests to prevent them from
being optimized using nulls in ways that end up not testing the
original intent. Specifically, dae-gc-refine-params.wast now
has calls to get a null of a type, instead of just having a ref.null
of that type (which could be optimized now), and
dae-gc-refine-return uses locals instead of ref.nulls.
|
|
|
|
|
|
|
| |
- Do not require defaultable types in function returns
- Increase likelihood of `none` function return types
- Correctly generate subtypes of basic types
- Actually check output in tests
- Print to cout instead of cerr
|
|
|
|
|
|
| |
(#4336)
(i32(x) != 0) | (i32(y) != 0) ==> i32(x | y) != 0
(i64(x) != 0) | (i64(y) != 0) ==> i64(x | y) != 0
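A standalone spot check of the 64-bit identity (not Binaryen code):
```cpp
#include <cassert>
#include <cstdint>

int main() {
  const int64_t samples[] = {0, 1, -1, 1LL << 40, INT64_MIN, INT64_MAX};
  for (int64_t x : samples) {
    for (int64_t y : samples) {
      // (x != 0) | (y != 0)  ==>  (x | y) != 0: the OR of the operands has a
      // bit set exactly when at least one operand does.
      assert(((x != 0) | (y != 0)) == ((x | y) != 0));
    }
  }
}
```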
|
|
|
|
|
|
| |
(#4333)
(i32(x) == 0) & (i32(y) == 0) ==> i32(x | y) == 0
(i64(x) == 0) & (i64(y) == 0) ==> i64(x | y) == 0
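A standalone check of the underlying reasoning (not Binaryen code): both values are zero exactly when neither has any bit set, i.e. when their bitwise OR is zero.
```cpp
#include <cassert>
#include <cstdint>

int main() {
  const int64_t samples[] = {0, 1, -1, 256, INT64_MIN, INT64_MAX};
  for (int64_t x : samples) {
    for (int64_t y : samples) {
      // (x == 0) & (y == 0)  ==>  (x | y) == 0
      assert(((x == 0) & (y == 0)) == ((x | y) == 0));
    }
  }
}
```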
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add a new fuzzer binary that repeatedly generates random types to find bugs in
the type system implementation. Each iteration creates some number of root types
followed by some number of subtypes thereof. Each built type can contain
arbitrary references to other built types, regardless of their order of
construction.
Right now the fuzzer only finds fatal errors in type building (and in its own
implementation), but it is meant to be extended to check other properties in the
future, such as that LUB calculations work as expected.
The logic for creating types is also intended to be integrated into the main
fuzzer in a follow-on PR so that the main fuzzer can fuzz with arbitrarily more
interesting GC types.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds relaxed-simd instructions based on the current status of the
proposal
https://github.com/WebAssembly/relaxed-simd/blob/main/proposals/relaxed-simd/Overview.md.
Binary opcodes are based on what is listed in
https://github.com/WebAssembly/relaxed-simd/blob/main/proposals/relaxed-simd/Overview.md#binary-format.
Text names are not fixed yet, so for this prototype we chose names that map to
the non-relaxed versions.
Support for these instructions has been added to LLVM via builtins;
adding support here will allow Emscripten to successfully compile files
that use those builtins.
Interpreter support has also been added, delegating to the non-relaxed
versions of the instructions: most instructions are implemented in the
interpreter the same way as the non-relaxed simd128 instructions, except for
fma/fms, which are always fused.
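To illustrate what "always fused" means, here is a standalone scalar example (not the actual SIMD interpreter code): a fused multiply-add rounds once, so it can differ from a separate multiply followed by an add.
```cpp
#include <cmath>
#include <cstdio>

int main() {
  // A fused multiply-add computes a*b + c with a single rounding step,
  // while separate operations round twice; the results can differ.
  double a = 1.0 + 0x1p-52; // 1 + 2^-52, the smallest double above 1.0
  double b = 1.0 - 0x1p-52;
  double c = -1.0;
  double fused = std::fma(a, b, c); // exact a*b is 1 - 2^-104, so this is -2^-104
  double unfused = a * b + c;       // a*b rounds to 1.0, so this is 0.0
  std::printf("fused:   %a\nunfused: %a\n", fused, unfused);
}
```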
|
|
|
|
|
| |
The order of operations could allow us to add vars but then later decide
not to do the optimization due to unreachability. And then we did not do a
fixup for non-nullability for those args, leading to a fuzzer error.
|
|
|
|
|
|
|
|
| |
Found by the fuzzer. Calling makeZero on an rtt with depth will
error because we try to create a zero Literal from it, and we can't
do that - we don't know a list of super types to give it. We could
work around it, but we don't want to: if the rtt has depth then we
can't make a nice zero for it, we'd need some rtt.subs anyhow,
so simply mark it as a type we can't make a zero for.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#4307)
i64.extend_i32_u(i32.load8_u(x)) -> i64.load8_u(x)
i64.extend_i32_u(i32.load16_u(x)) -> i64.load16_u(x)
i64.extend_i32_s(i32.load8_u(x)) -> i64.load8_u(x)
i64.extend_i32_s(i32.load16_u(x)) -> i64.load16_u(x)
i64.extend_i32_s(i32.load8_s(x)) -> i64.load8_s(x)
i64.extend_i32_s(i32.load16_s(x)) -> i64.load16_s(x)
i64.extend_i32_u(i32.load(x))) -> i64.load32_u(x)
i64.extend_i32_s(i32.load(x))) -> i64.load32_s(x)
don't apply to
i64.extend_i32_u(i32.load8_s(x)) -> skip
i64.extend_i32_u(i32.load16_s(x)) -> skip
i64.extend_i32_s(i32.atomic.load(x)) -> skip
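A standalone check of the arithmetic behind one of these rules and one skipped case (not Binaryen code): a zero-extended load is non-negative as an i32, so sign- and zero-extension to i64 agree on it, while zero-extending a sign-extended value does not round-trip.
```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (int v = 0; v < 256; ++v) {
    uint8_t byte = static_cast<uint8_t>(v); // the byte in memory

    // i64.extend_i32_s(i32.load8_u(x)) == i64.load8_u(x):
    // the zero-extended 8-bit value is non-negative as an i32, so
    // sign-extending it to i64 equals zero-extending the byte directly.
    int64_t viaI32 = static_cast<int64_t>(static_cast<int32_t>(byte));
    int64_t direct = static_cast<int64_t>(static_cast<uint64_t>(byte));
    assert(viaI32 == direct);

    // Skipped case: i64.extend_i32_u(i32.load8_s(x)) is NOT i64.load8_s(x).
    // Zero-extending a sign-extended negative i32 gives a different value.
    int32_t signed32 = static_cast<int8_t>(byte);
    uint64_t zextOfSext = static_cast<uint32_t>(signed32);
    int64_t sext64 = static_cast<int8_t>(byte);
    if (byte >= 128) {
      assert(static_cast<int64_t>(zextOfSext) != sext64);
    }
  }
}
```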
|
|
|
| |
This just moves code around.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before this we had special logic for various call types. This replaces all
that with a single general code path, which unifies everything except
for control flow constructs (which remain as before, handled in a
special way for each of them).
The algorithm is simple and direct, basically it goes through the
children and when it finds a block, it sees if it can move the block's
contents outside of the parent. While doing so it takes into account
effects and so forth.
To make this easy, a random-access API is added to ChildIterator.
Diff without whitespace makes the existing test updates a lot simpler.
Note that this is not NFC as the old algorithm had some quirks like
not taking into account effects when there were more than 2 children;
the new code is uniform in how it handles things.
This ends up removing 19% of all blocks in j2wasm, which reduces
total code size by 1%.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#4319)
Before, if we saw a param is written, that prevented us from refining its type:
  function foo(x : oldType) {
    ..
    x = someValue;
    ..
  }
Even if all calls to foo send some specific struct type that we'd like to
refine the param to, seeing that write stopped us. To handle such a write we
need some extra handling for the case in which a less-specific type is written
to the param (that is, if someValue is of type oldType): something like this:
  function foo(x : newType) {
    var x_old : oldType;
    x_old = x; // copy the param to x_old, and use x_old everywhere
    ..
    x_old = someValue;
    ..
  }
That is, still refine the param type, but inside the function use a new local
that has the old type and is guaranteed to validate. This PR implements that
logic so that we can optimize more cases.
To allow that, this PR avoids trying to both refine a param's type and remove
it as unused - that has annoying corner cases. If a param is unused, we can
simply remove it anyhow.
|
|
|
|
|
|
|
|
|
| |
This specializes the fields of structs based on the types written to them. That is,
if a field is of type A but in practice we always write some subtype B to it
then we can change the type of the field to that.
On j2wasm this manages to improve at least one field in 2% of types. Not a
large amount, but this does lead to further benefits in later opts (e.g. about a third
of the improvements are to turn a field non-nullable).
|
|
|
|
|
|
| |
Tiny followup to #4314
Also updates some function types in test output, fixing breakage on
main after racing landings.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The old algorithm can be summarized as: In each basic block, start at the beginning.
Each pair of live locals there might interfere with each other, as they might arrive from
different entry blocks with different values. Afterwards, go through the block and find
overlapping live ranges, and mark interferences there as well.
This is non-linear because at the start of the block we do a double loop over all
pairs of live locals, which in general can be O(N^2) (where N is the number of
locals). It also has the downside of ignoring copies: if two locals have
overlapping live ranges but must have identical values on those ranges, they do
not actually interfere, for example
  x = 10;
  y = x;
  .. // live ranges overlap here
  foo(x, y); // live ranges end here
We can ignore this overlap since the copy shows they are identical there, but the
pass did not take this into account. To some extent other passes can remove such
copies (SimplifyLocals, MergeLocals, RedundantSetElimination), but in general
this was a weak spot for the optimizer.
I realized there is a solution to both these problems: In Wasm, given that we have
a default value for all locals, if a local is live at the start of a block then it must be
live at the end of all the blocks reaching it. That is so because the liveness will
extend backwards all the way to some set of the local, possibly all the way to
the zero-initialization at the start of the function, and it extends that way through
all predecessor blocks. A consequence of this is that there are no interferences
between locals that only occur during a merge: The live ranges include the
predecessor blocks, and theirs, and so forth, until we reach a block where one
of the locals is assigned a value different from the other's. That is a necessary
and sufficient condition for interference, and therefore when processing a block
we only need to look at its contents and can ignore the merging of control flow,
which allows us to be linear.
More details on this and on the new algorithm in comments in the source, but
the basic idea is that it simply goes through each block in a linear way, finding
which values are assigned to each local (using a numbering of unique values),
and noting which are live at each time. If two locals are live and one is assigned
a value that is not the same as the value in the other, mark them as interfering.
This is of substantial benefit to j2wasm output, I believe because it is common
there to find local subexpression elimination opportunities after inlining, and
each time we find one we add a local. If we inline different functions into the
same target, we may end up with copied locals for each of them. (This was
not noticed in the past because it is very rare on LLVM output, which has
already had inlining and GVN etc. done.)
There is a small benefit to LLVM output as well, though just a few
percent at best. However, it is enough to be noticeable on some of
the code size tests.
This is also faster than the previous pass. It's normally not noticeable
as this pass is not one of the slowest anyhow, but I found some real-world
codebases where the pass becomes 50% faster. I have not found any
case where it is slower than the old algorithm.
Fuzzed over several days to be sure this is correct, and also verified
on the emscripten test suite.
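A compact sketch of the per-block scan (toy instruction format and names, not the actual CoalesceLocals code): give each distinct value a number, track which value each local currently holds, and record an interference only when two live locals hold different value numbers.
```cpp
#include <cstdio>
#include <set>
#include <utility>
#include <vector>

// Toy "set local" instruction: local `dest` is assigned either a fresh unknown
// value or a copy of another local. (Illustrative only; not Binaryen's IR.)
struct SetLocal {
  int dest;
  int copyOf; // -1 means "some new, unique value"
};

int main() {
  const int numLocals = 3;
  // x = 10; y = x; z = 20;  -> x and y share a value number, z does not.
  std::vector<SetLocal> block = {{0, -1}, {1, 0}, {2, -1}};

  std::vector<int> valueOf(numLocals, -1);
  std::vector<bool> assigned(numLocals, false); // crude stand-in for liveness
  std::set<std::pair<int, int>> interferences;
  int nextValue = 0;

  for (const SetLocal& set : block) {
    valueOf[set.dest] = set.copyOf >= 0 ? valueOf[set.copyOf] : nextValue++;
    assigned[set.dest] = true;
    // Compare only against locals already assigned in this block; the real
    // pass uses proper liveness, but the idea is the same: two live locals
    // interfere only if they hold different value numbers.
    for (int other = 0; other < numLocals; ++other) {
      if (other != set.dest && assigned[other] &&
          valueOf[other] != valueOf[set.dest]) {
        interferences.insert(
          {std::min(other, set.dest), std::max(other, set.dest)});
      }
    }
  }

  for (auto [a, b] : interferences) {
    std::printf("locals %d and %d interfere\n", a, b);
  }
  // Prints that local 2 interferes with 0 and 1, while 0 and 1 (a copy pair)
  // do not interfere and so could be coalesced.
}
```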
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The code found a cast that has another cast as its input, but forgot that
the child might be a fallthrough value - that is, there might be more
code that needs to be kept around.
Rather than fix the middle of the three cases there - the one with
HeapType::isSubType(childIntendedType, intendedType) - I
noticed it is not actually needed. That case checks if the child's
type is more specific than the parent's, and if so, the parent cast
is not needed. But we already handle that earlier in the
same function: regardless of what the child of a static cast is,
if the cast is static and the input is already of the proper type, the
cast is unneeded (see lines 1565-1566).
|
|
|
|
| |
This avoids cluttering the main wasm namespace, and clarifies what the
scanner does.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This improves validation of `catch` bodies mostly by checking the
validity of `pop`s.
For every `catch` body:
- Checks if its tag exists
- If the tag's type is none:
- Ensures there are no `pop`s
- If the tag's type is not none:
- Checks if there's a single `pop` within the catch body
- Checks if the tag type matches the `pop`'s type
- Checks if the `pop`'s location is valid
For every `catch_all` body:
- Ensures there are no `pop`s
This uncovers several bugs related to `pop`s in existing tests, which
this PR also fixes.
|