| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We may inline multiple times into a single function. Previously, if we did so, we
performed the "fixups", such as ReFinalize and non-nullable local fixes, once per
such inlining. That is wasteful, as each ReFinalize etc. scans the whole function;
the fixups can instead be done once, after we copy in the code from all the
inlinings. That is what this PR does: it splits doInlining() into one function that
inlines code and another that does the updates after, and the updates run once,
after all inlinings.
This turns out to be very important: a 5x speedup on two large real-world wasm
files I am looking at. The reason is that we actually inline more than once in
half the cases, and sometimes far more - in one case we inline over 1,000
times into a function! (and previously ran ReFinalize 1,000 times over it)
This is practically NFC, but it turns out that there are some tiny noticeable
differences between running ReFinalize once at the end vs. once after each
inlining. These differences are not really functional or observable in the
behavior of the code, and optimizations would remove them anyhow, but
they are noticeable in two tests here. The changes to tests are, in order:
* Different block names, just because the counter we use sees more things.
* In a testcase with unreachable code, we inline twice into a function, and
the first inlining brings in an unreachable, and ReFinalizing early will lead
to it propagating differently than if we wait to ReFinalize. (It actually leads
to another cycle of inlining in that case, as a fluke.)
|
|
|
|
|
|
|
|
|
|
|
|
| |
We previously supported (and primarily used) a non-standard text format for
conditionals in which the condition, if-true expression, and if-false expression
were all simply s-expression children of the `if` expression. The standard text
format, however, requires the use of `then` and `else` forms to introduce the
if-true and if-false arms of the conditional. Update the legacy text parser to
require the standard format and update all tests to match. Update the printer to
print the standard format as well.
The .wast and .wat test inputs were mechanically updated with this script:
https://gist.github.com/tlively/85ae7f01f92f772241ec994c840ccbb1
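To illustrate the change (hypothetical function names), a conditional that used
to be written in the folded, non-standard form must now use then/else:
(if (local.get $x)
  (call $a)
  (call $b))
becomes
(if (local.get $x)
  (then (call $a))
  (else (call $b)))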
|
|
|
|
|
|
|
|
|
|
| |
Previously the lit test update script interpreted module names as the names of
import items and export names as the names of export items, but it is more
precise to use the actual identifiers of the imported or exported items as the
names instead.
Update update_lit_checks.py to use a more correct regex to match names and to
correctly use the identifiers of import and export items as their names. In some
cases this can improve the readability of test output.
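As a sketch of the difference (hypothetical module), given
(import "env" "log" (func $logImpl (param i32)))
(export "run" (func $main))
the script previously took "env" and "run" as the names, whereas it now uses
the identifiers $logImpl and $main.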
|
|
|
|
| |
We don't have `*.fromasm` files anymore. Also, the `BIN_DIR` and
`WATERFALL_BUILD_DIR` variables no longer seem to be used.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A trivial call is a function that does nothing but immediately call another
function, like
function foo(x, y) {
  return bar(y, 15);
}
We can inline those and expect to benefit in most cases, though we might
increase code size slightly. Hence it makes sense to inline such cases, even
though in general we are careful and do not inline functions with calls in
them; in a "trampoline" like this, most of the cost is likely the call itself,
which inlining avoids.
Suggested based on findings in Java.
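For reference, the same trampoline in wat form (names are illustrative):
(func $foo (param $x i32) (param $y i32) (result i32)
  (call $bar (local.get $y) (i32.const 15)))
;; After inlining, (call $foo (A) (B)) behaves like (call $bar (B) (i32.const 15)),
;; with the inliner keeping the evaluation of A for its side effects.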
|
|
|
|
|
|
|
|
|
| |
LocalCSE is nice for large expressions, but for small things it has always been of
unclear benefit, since VMs do GVN/CSE anyhow. So we are likely not speeding
anything up, but hopefully we are at least reducing code size. Doing LocalCSE on
something small like a global.get may well increase code size, however (since we
add a tee, and since local gets are of similar size to global gets - it depends on
the LEB sizes of the indices). On real-world Java code that overhead is noticeable,
so this PR makes us more careful, and we skip things of size 1 (no children).
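A sketch of that overhead on a size-1 expression (hypothetical names):
;; Before: the same size-1 expression appears twice.
(call $use (global.get $g) (global.get $g))
;; After LocalCSE: a tee plus a get, which is no smaller, since local
;; indices are LEB-encoded just like global indices.
(call $use (local.tee $t (global.get $g)) (local.get $t))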
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When printing Binaryen IR, we previously generated names for unnamed heap types
based on their structure. This was useful for seeing the structure of simple
types at a glance without having to separately go look up their definitions, but
it also had two problems:
1. The same name could be generated for multiple types. The generated names did
not take into account rec group structure or finality, so types that differed
only in these properties would have the same name. Also, generated type names
were limited in length, so very large types that shared only some structure
could also end up with the same names. Using the same name for multiple types
produces incorrect and unparsable output.
2. The generated names were not useful beyond the most trivial examples. Even
with length limits, names for nontrivial types were extremely long and visually
noisy, which made reading disassembled real-world code more challenging.
Fix these problems by emitting simple indexed names for unnamed heap types
instead. This regresses readability for very simple examples, but the trade-off
is worth it.
This change also reduces the number of type printing systems we have by one.
Previously we had the system in Print.cpp, but we had another, more general and
extensible system in wasm-type-printing.h and wasm-type.cpp as well. Remove the
old type printing system from Print.cpp and replace it with a much smaller use
of the new system. This requires significant refactoring of Print.cpp so that
the PrintExpressionContents object now holds a reference to a parent
PrintSExpression object that holds the type name state.
This diff is very large because almost every test output changed slightly. To
minimize the diff and ease review, change the type printer in wasm-type.cpp to
behave the same as the old type printer in Print.cpp except for the differences
in name generation. These changes will be reverted in much smaller PRs in the
future to generally improve how types are printed.
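For example, unnamed types now print with simple indexed names along these
lines (the exact names shown are illustrative):
(type $0 (struct (field i32)))
(type $1 (array (mut i8)))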
|
|
|
|
|
|
|
| |
Followup to #5860, this does the same for LocalCSE.
As there, this is valid because it is ok if we branch away: this pass adds a
local.tee of a reused value and reads it later, and adding the tee is safe even
if we branch away and never use it.
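A sketch of why the tee is safe (hypothetical names), where the call is treated
as possibly branching:
;; Before: the same pure value is computed twice around a call.
(drop (i32.mul (local.get $x) (local.get $x)))
(call $may_branch)
(drop (i32.mul (local.get $x) (local.get $x)))
;; After: a tee plus a get. If $may_branch branches away, the later get
;; is simply never reached; the extra tee itself does no harm.
(drop (local.tee $t (i32.mul (local.get $x) (local.get $x))))
(call $may_branch)
(drop (local.get $t))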
|
|
|
|
|
|
|
|
|
|
|
| |
SimplifyLocals (#5860)
This addresses most of the minor regression from the correctness fix in #5857.
That PR makes us consider calls as branching, but in some cases it is ok to
ignore that branching (see the comment in the code here), which this PR allows as
an option.
This undoes one test change from that PR, showing it undoes the regression for
SimplifyLocals. More tests are added to cover this specifically as well.
|
|
|
|
|
|
|
|
|
|
| |
All top-level Module elements are identified and referred to by Name, but for
historical reasons element and data segments were referred to by index instead.
Fix this inconsistency by using Names to refer to segments from expressions that
use them. Also parse and print segment names like we do for other elements.
The C API is partially converted to use names instead of indices, but there are
still many functions that refer to data segments by index. Finishing the
conversion can be done in the future once it becomes necessary.
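For example, a named passive data segment can now be written and referred to
by name (illustrative snippet; assumes a memory is declared):
(data $hello "hello")
(func $init
  (memory.init $hello (i32.const 0) (i32.const 0) (i32.const 5))
  (data.drop $hello))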
|
|
|
|
|
|
|
|
|
|
|
|
| |
We must refinalize because inlining unreachable code can cause
more things to become unreachable.
We must also uniquify label names before refinalizing, since the
IR must be valid at that point, after the code has been moved.
This causes some minor changes to existing test code (some
label changes, and refinalization makes more things
unreachable), but only the two new tests show actual problems
that needed to be fixed.
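A sketch of why refinalizing is needed (hypothetical functions):
(func $trap (result i32)
  (unreachable))
(func $user (result i32)
  (i32.add (call $trap) (i32.const 1)))
;; After inlining $trap, the add's first operand is unreachable, so the
;; add itself (and possibly enclosing blocks) must be refinalized to the
;; unreachable type.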
|
|
|
|
|
|
|
|
|
|
| |
Pretty simple logic bug, but it ended up causing us to not optimize sometimes.
Sadly the original tests happened to not have anything that depended on the
index in isolation.
Fix it and add comprehensive tests for using that index properly. Also test the
call.without.effects intrinsic, which is orthogonal to this but also worth testing,
as it is a big use case here.
|
|
|
| |
See: https://reviews.llvm.org/D125728
|
|
|
|
|
|
|
|
|
|
|
| |
This can help in rare cases in MVP wasm, say for the return value of a block, but
for wasm GC it is very important due to casts.
Similar logic was added for SimplifyLocals as part of #5194 (it should probably
have been a separate PR then); this does the right thing for RedundantSetElimination
as its own PR. Full tests will appear in that later PR (it is not really possible to test
the GC side yet - we need the logic in the later PR that actually switches to a more
refined local index when available).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously the pass only pushed past an if or a br_if. This does the same but into an
if arm. On Wasm GC for example this can perform allocation sinking:
function foo() {
  x = new A();
  if (..) {
    use(x);
  }
}
=>
function foo() {
  if (..) {
    x = new A(); // this moved
    use(x);
  }
}
The allocation won't happen if we never enter the if. This helps wasm MVP too,
and in fact some existing tests benefit.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
x ? 0 : y ==> z & y where z = !x
x ? y : 1 ==> z | y where z = !x
Only do this when we have z = !x, that is, we can invert x without adding
an actual eqz (which would add work).
To do this, canonicalize selects to prefer to flip the arms, when
possible, if it would move a constant to a location that the existing
optimizations already turn into an and/or. That is,
x >= 5 ? 0 : y != 42
would be canonicalized into
x < 5 ? y != 42 : 0
and existing opts turn that into
(x < 5) & (y != 42)
The canonicalization does not always enable this optimization, as we need
the values to be boolean, but it still gives us more regular code, which may
compress slightly better.
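Roughly, in wat form (a select takes the if-true arm, the if-false arm, then the
condition):
;; x >= 5 ? 0 : y != 42, after canonicalization flips the arms:
(select
  (i32.ne (local.get $y) (i32.const 42))
  (i32.const 0)
  (i32.lt_s (local.get $x) (i32.const 5)))
;; which existing opts then turn into (x < 5) & (y != 42):
(i32.and
  (i32.lt_s (local.get $x) (i32.const 5))
  (i32.ne (local.get $y) (i32.const 42)))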
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously the wat parser would turn this input:
(block
  (nop)
)
into something like this:
(block $block17
  (nop)
)
It just added a name all the time, in case the block is referred to by an index
later even though it doesn't have a name.
This PR makes us roundtrip more precisely by not adding such names: if there
was no name before, and there is no break by index, then we do not add a name.
In addition, this will be useful for non-nullable locals since whether a block has
a name or not matters there. Like #4912, this makes us more regular in our
usage of block names.
|
|
|
| |
I was reading these tests and failing to find the names script.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Updated wasm.h/cpp for DataSegments
* Updated wasm-binary.h/cpp for DataSegments
* Removed the link from Memory to DataSegments and updated module-utils, Metrics, and wasm-traversal
* Checked isPassive when copying data segments, to know whether to construct the data segment with an offset or not
* Removed the memory member var from the DataSegment class, as there is only one memory right now; updated wasm-validator.cpp
* Updated wasm-interpreter
* First look at updating Passes
* Updated wasm-s-parser
* Updated files in src/ir
* Updated tools files
* Last pass on src files before building
* Added visitDataSegment
* Fixed build errors
* Data segments need a name
* Fixed a var name
* Ran clang-format
* Ensured a name on DataSegment
* Ensured more data segments have names
* Added explicit name support
* Fixed the fuzzing name
* Output the data name in the wasm binary only if it is explicit
* Checked the temp dataSegments vector in validateBinary, because it is the one with the segments before we processNames
* Passed along whether data segment names are explicitly set
* Ran auto_update_tests.py and check.py; success all around
* Removed an errant semicolon and corrected a counter; everything still passes
* Linting
* Fixed processing of memory names after parsing from binary
* Updated the test from the last fix
* Corrected an error comment
* Implemented kripken@'s review comments
* Implemented tlively@'s review comments
* Updated tests that remove data print when == 0
* Ran clang-format
* Implemented tlively@'s review comments
* Ran clang-format
|
|
|
|
|
|
| |
(#4336)
(i32(x) != 0) | (i32(y) != 0) ==> i32(x | y) != 0
(i64(x) != 0) | (i64(y) != 0) ==> i64(x | y) != 0
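In wat form the pattern and its replacement look roughly like:
(i32.or
  (i32.ne (local.get $x) (i32.const 0))
  (i32.ne (local.get $y) (i32.const 0)))
;; ==>
(i32.ne
  (i32.or (local.get $x) (local.get $y))
  (i32.const 0))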
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before this we had special logic for various call types. This replaces all
that with a single general code path, which unifies everything except
for control flow constructs (which remain as before, handled in a
special way for each of them).
The algorithm is simple and direct: basically, it goes through the
children and, when it finds a block, it sees if it can move the block's
contents outside of the parent. While doing so it takes into account
effects and so forth.
To make this easy, a random-access API is added to ChildIterator.
Viewing the diff without whitespace makes the existing test updates a lot simpler to read.
Note that this is not NFC as the old algorithm had some quirks like
not taking into account effects when there were more than 2 children;
the new code is uniform in how it handles things.
This ends up removing 19% of all blocks in j2wasm, which reduces
1% of total code size.
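A sketch of the kind of rewrite this enables (hypothetical calls; $g returns
nothing, and effects permit the move):
(call $f
  (block (result i32)
    (call $g)
    (i32.const 1)))
;; ==> the block's non-value contents move out before the parent:
(call $g)
(call $f (i32.const 1))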
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The old algorithm can be summarized as: In each basic block, start at the beginning.
Each pair of live locals there might interfere with each other, as they might arrive from
different entry blocks with different values. Afterwards, go through the block and find
overlapping live ranges, and mark interferences there as well.
This is non-linear because at the start of the block we do a double loop over all
pairs of live locals, which in general is O(N^2) (where N is the number of locals). It also
has the downside of ignoring copies: if two locals have overlapping live ranges but
they must have identical values on those ranges, they do not actually interfere,
for example
x = 10;
y = x;
.. // live ranges overlap here
foo(x, y); // live ranges end here.
We can ignore this overlap since the copy shows they are identical there, but the
pass did not take this into account. To some extent other passes can remove such
copies (SimplifyLocals, MergeLocals, RedundantSetElimination), but in general
this was a weak spot for the optimizer.
I realized there is a solution to both these problems: In Wasm, given that we have
a default value for all locals, if a local is live at the start of a block then it must be
live at the end of all the blocks reaching it. That is so because the liveness will
extend backwards all the way to some set of the local, possibly all the way to
the zero-initialization at the start of the function, and it extends that way through
all predecessor blocks. A consequence of this is that there are no interferences
between locals that only occur during a merge: The live ranges include the
predecessor blocks, and theirs, and so forth, until we reach a block where one
of the locals is assigned a value different from the other. That is a necessary and
sufficient condition for interference, and therefore when processing a block we
only need to look at its contents, and can ignore the merging of control flow,
which allows us to be linear.
More details on this and on the new algorithm in comments in the source, but
the basic idea is that it simply goes through each block in a linear way, finding
which values are assigned to each local (using a numbering of unique values),
and noting which are live at each time. If two locals are live and one is assigned
a value that is not the same as the value in the other, mark them as interfering.
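Concretely, the earlier copy example in wat form (the value numbers are
conceptual annotations, not part of the IR):
(local.set $x (i32.const 10))  ;; $x holds value #1
(local.set $y (local.get $x))  ;; $y holds the same value #1 (a copy)
;; The live ranges of $x and $y overlap here, but both hold value #1,
;; so they do not interfere and can be coalesced.
(call $foo (local.get $x) (local.get $y))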
This is of substantial benefit to j2wasm output, I believe because it is common
there to find local common subexpression elimination opportunities after inlining, and
each time we find one we add a local. If we inline different functions into the
same target, we may end up with copied locals for each of them. (This was
not noticed in the past because it is very rare on LLVM output, which has
already had inlining and GVN etc. done.)
There is a small benefit to LLVM output as well, though just a few
percent at best. However, it is enough to be noticeable on some of
the code size tests.
This is also faster than the previous pass. It's normally not noticeable
as this pass is not one of the slowest anyhow, but I found some real-world
codebases where the pass becomes 50% faster. I have not found any
case where it is slower than the old algorithm.
Fuzzed over several days to be sure this is correct, and also verified
on the emscripten test suite.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Canonicalize:
(signed)x > -1 ==> x >= 0
(signed)x <= -1 ==> x < 0
(signed)x < 1 ==> x <= 0
(signed)x >= 1 ==> x > 0
(unsigned)x < 1 ==> x == 0
(unsigned)x >= 1 ==> x != 0
This should help #4265, and in general 0 is usually a more
common constant, and reasonable to canonicalize to.
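In wat form, two of these rewrites look like:
(i32.gt_s (local.get $x) (i32.const -1))  ;; (signed)x > -1
;; ==>
(i32.ge_s (local.get $x) (i32.const 0))   ;; x >= 0
(i32.lt_u (local.get $x) (i32.const 1))   ;; (unsigned)x < 1
;; ==>
(i32.eqz (local.get $x))                  ;; x == 0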
|
|
|
|
|
| |
Code in the If condition can be moved out to before the if.
Existing test updates are 99% whitespace.
|
|
|
|
|
| |
Locally I saw a 10% speedup on j2cl but reports of regressions have
arrived, so let's disable it for now pending investigation. The option added
here should make it easy to experiment.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This PR helps with functions like this:
function foo(x) {
  if (x) {
    ..
    lots of work here
    ..
  }
}
If "lots of work" is large enough, then we won't inline such a
function. However, we may end up calling into the function
only to get a false on that if and immediately exit. So it is useful
to partially inline this function, basically by creating a split
of it into a condition part that is inlineable
function foo$inlineable(x) {
if (x) {
foo$outlined();
}
}
and an outlined part that is not inlineable:
function foo$outlined(x) {
..
lots of work here
..
}
We can then inline the inlineable part. That means that a call
like
foo(param);
turns into
if (param) {
foo$outlined();
}
In other words, we end up replacing a call and then a check with
a check and then a call. Any time that the condition is false, this
will be a speedup.
The cost here is increased size, as we duplicate the condition
into the callsites. For that reason, only do this when heavily
optimizing for size.
This is a 10% speedup on j2cl. This helps two types of functions
there: Java class inits, which often look like "have I been
initialized before? if not, do all this work", and also assertion
methods which look like "if the input is null, throw an
exception".
|
|
|
|
|
|
|
|
|
|
|
|
| |
Enable it in -O3 and -Os and higher.
This helps very little on output from LLVM, but it also does not alter
compile times much. On code that has not been run through
an optimizing compiler already, this can help quite a lot, e.g., 15% of
code size on some wasm GC samples.
This will not normally help with speed, as optimizing VMs do such
things anyhow. However, this can help baseline compilers and
interpreters and so forth.
|
|
|