| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This code used to remove functions it no longer thought were needed: if it
added a legalized version of an import, it would remove the illegal one,
which was no longer needed. To avoid removing an illegal import that was
still used, it checked for ref.func appearances.
But this was bad in two ways. First, we need to legalize the ref.funcs too:
an illegal import cannot be called in any way, not by a direct call, an
indirect call, or a call by reference of a ref.func. Second, it is silly to
remove unneeded functions here; we have a pass for that.
This removes the removal of functions, and adds proper updating of
ref.funcs, which means making them refer to the stub function that looks
like the original import, but that calls the legalized one and connects
things up properly, exactly the same way as other calls.
Also remove the code that checked whether we were in the stub/thunk so as
not to replace the call there. That code is not needed: no one will ever
call the illegal import, so we do not need to be careful about
preserving such calls.
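A minimal sketch of the stub pattern described above, with hypothetical names
(the real pass generates its own naming scheme): the legalized import takes the
i64 split into two i32s, and the stub keeps the original signature, so direct
calls, indirect calls, and ref.funcs can all simply point at the stub:

  (import "env" "foo" (func $legal$foo (param i32 i32)))
  ;; stub with the original, illegal signature; calls and ref.funcs of the
  ;; old import are updated to refer to this function instead
  (func $foo (param $x i64)
    (call $legal$foo
      (i32.wrap_i64 (local.get $x))                             ;; low 32 bits
      (i32.wrap_i64 (i64.shr_u (local.get $x) (i64.const 32)))  ;; high 32 bits
    ))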
|
|
|
|
|
|
|
|
| |
The passive keyword has been removed from the spec's text format, and now
any data segment that doesn't have an offset is considered passive.
This PR removes the keyword from both the parser and the Print pass, and
updates all tests that used that syntax.
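For example (a minimal sketch; the memory size and contents are arbitrary):

  (memory 1)
  ;; passive, simply because it has no offset (no more passive keyword):
  (data "hello")
  ;; an active segment still provides an offset:
  (data (i32.const 0) "hello")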
Fixes #2339
|
|
|
|
|
|
|
|
|
|
|
|
| |
Log an i64 as two i32s. That keeps the output consistent regardless of
whether we legalize.
Do not print a reference result: the printed value cannot be compared, as
funcref(10) (where 10 is the index of the function) is not guaranteed to be
the same after optimizations.
Trap when trying to call an export with a nondefaultable type (instead of
asserting when trying to create zeros for it).
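A sketch of the i64 splitting mentioned above, assuming a hypothetical
$log-i32 logging import (the import name and the low/high order here are
illustrative):

  (import "fuzzing-support" "log-i32" (func $log-i32 (param i32)))
  (func $log-i64 (param $x i64)
    ;; low 32 bits first, then high 32 bits
    (call $log-i32 (i32.wrap_i64 (local.get $x)))
    (call $log-i32 (i32.wrap_i64 (i64.shr_u (local.get $x) (i64.const 32))))
  )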
|
|
|
|
|
|
|
|
|
| |
This is similar to the optimization of BrOn in #3719 and #3724. When the
type tells us what kind of input we have, we can tell at compile time what
result we'll get; for example, ref.is_func of a value with type (ref func)
will always return 1.
There are some code size and performance tradeoffs that should be looked
into more; they are marked as TODOs.
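For example, a sketch in the prototype syntax of the time:

  (func $check (param $f (ref func)) (result i32)
    ;; the param is a non-nullable function reference, so this is always 1
    ;; and can be folded to (i32.const 1)
    (ref.is_func (local.get $f)))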
|
|
|
|
|
|
| |
We missed that in effects.h, with the result that sets could look like
they had no side effects.
Fixes #3754
|
|
|
|
|
|
|
|
|
|
|
| |
In this case, there is a natural place to fix things up right after
removing a parameter (which is where a local gets added). Doing it
there avoids doing work on all functions unnecessarily.
Note that we could do something even simpler here than calling
the generic code: the parameter was not used, so the new local
is not used, and we could just change the type of the local or not
add it at all. Those would be slightly more code though, and add
complexity to the parameter removal method itself.
|
|
|
|
|
|
|
|
|
| |
This was noticed by samparker on LLVM:
https://reviews.llvm.org/D99171
This is apparently a pattern LLVM emits, and handling it there helps by 1-2%
on the real-world Bullet Physics codebase, so it seems worthwhile to do it
here as well.
|
|
|
|
|
|
|
| |
This is a partial revert of #3669, which removed the old implementation of
Type::getLeastUpperBound because it did not correctly handle recursive types.
The new implementation in this PR uses a TypeBuilder to construct LUBs, and
for recursive types it returns a temporary HeapType that has not yet been
fully constructed, to break what would otherwise be infinite recursion.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Several old passes like DeadArgumentElimination and DuplicateFunctionElimination
need to look at all ref.funcs, and they scanned functions for that, but that is
not enough, as such an instruction might also appear in a global initializer. To
fix this, add a walkModuleCode method.
walkModuleCode is useful for the pattern of creating a function-parallel pass to
scan functions quickly while also doing the same scanning of code at the module
level; it allows doing so in a single line.
(It is also possible to just do walk() on the entire module, which will find all code,
but that is not function-parallel. Perhaps we should have a walkParallel() option
to simplify this further in a followup, and that would call walkModuleCode afterwards
etc.)
Also add some missing validation and comments in the validator about issues that
I noticed in relation to the new testcases here.
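For example, in this sketch the only ref.func in the module sits in a global
initializer, so a scan of function bodies alone would miss it:

  (func $target)
  (global $g funcref (ref.func $target))  ;; module-level code, outside any function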
|
|
|
|
| |
If such a parameter is written to, we create a new local for each such
write, and must handle the non-nullability of those new locals.
|
|
|
|
| |
Parameters can be non-nullable, and must stay that way if they began as such.
By mistake we were modifying them along with the vars.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This implements emscripten-core/emscripten#13744
Inlining functions with a single use allows us to remove the function afterward.
That looks highly beneficial, shrinking every single benchmark in emscripten's
benchmark suite by an average of 2% on the macrobenchmarks and 3.5% across
all of them. Speed also improves, although mostly on the microbenchmarks, so
that may be less realistic.
There may be a slight downside to startup time due to emitting larger functions,
but given the baseline compilers in VMs these days it seems worth it, as the
delay would be just to get to the upper tier. On the benchmark suite the risk
seems low.
See more details in the PR above.
|
|
|
|
|
| |
The pass gives a simple short name to each type, $type$N. This can be
useful in debugging, to avoid the autogenerated names which can be very
long.
|
|
|
|
|
|
|
|
| |
We must write them to a tuple with nullable types, then fix that up when
reading. This is similar to what we do in handleNonNullableLocals, except
that it operates on the entire tuple type, so it can't share that code.
This fixes a regression from #3710 that was hard for the fuzzer to notice
until now.
|
|
|
|
|
|
|
| |
This looks like an issue that pre-exists #3731, but the testcase only fails
on that PR, for a reason I did not investigate in depth, so this should land
first.
Also fix the check for nondefaultability in the fuzzer.
|
|
|
|
|
|
| |
We can't disallow it, as its result is non-null, which we can't spill to a local.
This may eventually cause issues in the combination of GC + flatten, but
I don't expect it to. If it does, we may need to revisit.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Makes TypeBuilders growable, adds a `getTempHeapType` method, allows the
`getTemp*Type` methods to take arbitrary temporary or canonical HeapTypes rather
than just an index, and allows BasicHeapTypes to be assigned to TypeBuilder
slots. All of these changes are necessary for the upcoming re-implementation of
equirecursive LUB calculation.
Also adds a new utility to TypeBuilder for using `operator[]` as an intuitive
and readable wrapper around the `getTempHeapType` and `setHeapType` methods.
|
|
|
|
|
|
|
|
|
|
|
|
| |
#3719 optimized the case where the kind is what we want, like br_on_func of
a function. This handles the opposite case, where we know the kind is wrong, and
so the break is not taken.
This seems equally useful for "polymorphic" code that does a bunch of checks
and routes to different code for each case, as after inlining or other optimizations
we may see which paths are taken and which are not.
Also refactor the checking code to a shared location, as RefIs/As will also
want to use it.
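A sketch in the prototype syntax of the time: the input is known to be data,
never a function, so the branch cannot be taken and the br_on_func can be
replaced by its operand:

  (func $example (param $x (ref data)) (result funcref)
    (block $is-a-func (result funcref)
      ;; $x is a data reference, so this never branches
      (drop (br_on_func $is-a-func (local.get $x)))
      (ref.null func)))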
|
|
|
| |
That pass adds lots of new locals, and we need to handle non-nullable ones.
|
|
|
|
|
|
| |
This PR adds support for `ref.null t` as a valid element segment
item. The abbreviated format of `(elem ... func $f $g...)` is kept in
both printing and binary emitting if all items are `ref.func`s. Public
APIs aren't updated in this PR.
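For example, a sketch: the first segment keeps the abbreviation since all of
its items are ref.funcs, while the ref.null item in the second forces the
expression form:

  (table 4 funcref)
  (func $f)
  (func $g)
  (elem (i32.const 0) func $f $g)
  (elem (i32.const 2) funcref (ref.func $f) (ref.null func))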
|
|
|
|
| |
The type may prove the value is not null, and may also show it to be
of the type we are casting to. In that case, we can simplify things.
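For example, a sketch with a hypothetical struct type: the local's type already
proves the value is non-null, so the check is a no-op and can be dropped:

  (type $T (struct))
  (func $example (param $x (ref $T)) (result (ref $T))
    ;; $x is already non-null, so this can become just (local.get $x)
    (ref.as_non_null (local.get $x)))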
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Equirecursive type canonicalization
Use Hopcroft's DFA minimization algorithm to properly canonicalize and
deduplicate recursive types. Type canonicalization has two stages:
1. Shape canonicalization
- The top-level structure of HeapTypes is used to split the declared HeapTypes
into their initial partitions.
- Hopcroft's algorithm refines the partitions such that all pairs of
distinguishable types end up in different partitions.
- A fresh HeapTypeInfo is created for each final partition. Each new
HeapTypeInfo is linked to other new HeapTypeInfos to create a minimal type
definition graph that defines the same types as the original graph.
2. Global canonicalization
- Each new minimal HeapTypeInfo that does not have the same finite
shape as an existing globally canonical HeapTypeInfo is moved to the
global heap type store to become the new canonical HeapTypeInfo.
- Each temporary Type referenced by the newly canonical HeapTypeInfos is
replaced in-place with the equivalent canonical Type to avoid leaking
temporary Types into the global stores.
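For example, a sketch of two structurally identical recursive types: under
equirecursive typing they unfold to the same infinite tree, so shape
canonicalization places them in the same partition and they end up as a single
canonical HeapType:

  (type $list-a (struct (field i32) (field (ref null $list-a))))
  (type $list-b (struct (field i32) (field (ref null $list-b))))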
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
After this PR we still do not support non-nullable locals, but we no longer
turn all types nullable upon load. In particular, we support non-nullable
types on function parameters and struct fields, etc. This should be enough to
experiment with optimizations in both binaryen and in VMs regarding non-
nullability (since we expect that optimizing VMs can do well inside functions
anyhow; it's non-nullability across calls and from data that the VM can't be
expected to reason about).
A let is handled as before, by lowering it into gets and sets. In addition, we
turn non-nullable locals into nullable ones and add a ref.as_non_null on
all their gets (to keep the type identical there). This is used not just when
loading code with a let, but is also needed after inlining.
Most of the code changes here are removing FIXMEs for allowing
non-nullable types. But there is also code to handle the issues mentioned
above.
Most of the test updates are removing extra nulls that we added before
when we turned all types nullable. A few tests had actual issues, though,
and also some new tests are added to cover the code changes here.
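A sketch of the local handling described above, with a hypothetical struct
type: the non-nullable local becomes nullable, and each get is wrapped in
ref.as_non_null so its type stays non-nullable:

  (type $T (struct))
  (func $example (param $p (ref $T))
    ;; originally this local would have been (ref $T), which we do not yet support
    (local $x (ref null $T))
    (local.set $x (local.get $p))
    (drop
      (ref.as_non_null (local.get $x))))  ;; the get keeps its non-nullable type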
|
|
|
|
|
| |
We have the if's type, and when replacing it with a select we can use that
type, which can be more efficient. It also avoids a current crash after the
removal of LUBs, but it is worth doing regardless of that.
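For example, a sketch: both arms are function references, and the select is
simply annotated with the if's own type rather than computing a LUB:

  (elem declare func $f)
  (func $f)
  (func $pick (param $c i32) (result funcref)
    ;; before:  (if (result funcref) (local.get $c) (ref.func $f) (ref.null func))
    ;; after, reusing the if's type on the select:
    (select (result funcref)
      (ref.func $f)
      (ref.null func)
      (local.get $c)))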
|
|
|
|
|
|
| |
When we can skip function bodies, we still need to parse the start function
for the pthreads case; see details in the comments. This still gives us 99%
of the speedup, as the start function is just one function and it is not that
big, so with this we return to full speed after the reversion in #3705.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'm not entirely sure how LUB removal made this noticeable, as it seems
to be a pre-existing bug. However, somehow before #3669 it was not
noticeable; perhaps the finalize code worked around it.
The bug is that RemoveUnusedBrs was moving code around and
finalizing the parent before the child. The correct pattern is always to
work from the children outwards, as otherwise the parent is trying to
finalize itself based on non-finalized children.
The fix is to just not finalize in the stealSlice method. The caller can
do it after finishing any other work it has. As part of this refactoring,
move stealSlice into the single pass that uses it; aside from that being
more orderly, this method is really not a general-purpose tool, it is
quite specific to what RemoveUnusedBrs does, and it might easily
be used incorrectly elsewhere.
|
|
|
|
|
|
|
|
|
|
|
| |
The pass was only aware of Break and Switch. Refactor it to use the
generic code, so that we can first handle Break, and then if anything
remains, note that a problem was found. The same path can handle a Switch,
which we handled before, and also a BrOn, etc.
The git diff is sadly not that useful after the refactoring, but basically this just
moves the Break code and the Drop code, then adds the BranchUtils::operateOn
stuff after them (and we switch to a unified visitor so that we get called
for all expressions).
|
|
|
| |
Fixes #3664
|
|
|
| |
We were missing type names and the features section.
|
|
|
|
| |
Also add more spec tests, including one that verifies that we validate
rtt.sub in a global location, as fixed by #3694.
|
|
|
| |
Same as we already do for struct.set.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
This change was automatically generated by:
$ ./scripts/test/generate_lld_tests.py
$ ./auto_update_tests.py --binaryen-bin=../binaryen-out/bin lld
The changes here are mostly due to:
- llvm now emits names for globals and segments
- emscripten now packs EM_ASM consts into a single contiguous segment
|
|
|
|
|
|
|
| |
(#3680)
When storing to an i8, we can ignore any higher bits, etc.
Adds a getByteSize utility to Field to make this convenient.
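For example, a sketch: the field holds only 8 bits, so masking the stored
value is unnecessary and the i32.and can be removed:

  (type $S (struct (field (mut i8))))
  (func $store (param $s (ref $S)) (param $v i32)
    ;; only the low 8 bits are stored anyway
    (struct.set $S 0 (local.get $s) (i32.and (local.get $v) (i32.const 255))))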
|
| |
|
|
|
|
|
|
|
|
| |
Since correct LUB calculation for recursive types is complicated, stop depending
on LUBs throughout the code base. This also fixes a validation bug in which the
validator required blocks to be typed with the LUB of all the branch types, when
in fact any upper bound should have been valid. In addition to fixing that bug,
this PR simplifies the code for break handling by not storing redundant
information about the arity of types.
|
|
|
|
|
|
|
| |
Since in principle an unreachable expression can be used in any position. An
exception to this rule is in OptimizeInstructions, which avoids replacing
concrete expressions with unreachable expressions so that it doesn't need to
refinalize any expressions. Notably, Type::getLeastUpperBound was already
treating unreachable as the bottom type.
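For example, a sketch: the unreachable child is accepted in a position that
otherwise requires an i32:

  (func $example (result i32)
    (i32.add
      (i32.const 1)
      (unreachable)))  ;; valid in any position, as unreachable is the bottom type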
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Passive element segments do not belong to any table, so the link between
Table and elem needs to be weaker; i.e., an elem may have a table in the case
of active segments, or simply be a collection of function references in the
case of passive/declarative segments.
This PR takes Table::Segment out and turns it into a first-class module
element, just like tables and functions. It also implements early support
for parsing, printing, encoding and decoding passive/declarative elem
segments.
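A sketch of the three kinds of segments this distinguishes (hypothetical
function names):

  (table $t 2 funcref)
  (func $f)
  (func $g)
  (elem (table $t) (i32.const 0) func $f)  ;; active: bound to a table at an offset
  (elem func $g)                           ;; passive: applied later via table.init
  (elem declare func $f $g)                ;; declarative: only declares for ref.func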
|
|
|
|
|
|
|
|
|
|
|
| |
Just as reads and writes to memory can interfere with each other, reads and
writes of GC objects can interfere with each other. This PR adds new `readsHeap`
and `writesHeap` fields to EffectAnalyzer to account for this interference. Note
that memory accesses can never alias with GC heap accesses, so they are
considered separately. Similarly, it would be possible to prove that different
GC heap accesses never interfere with each other based on the accessed types,
but that's left to future work.
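For example, a sketch of the interference this tracks: the struct.get below may
read the field written by the struct.set when $a and $b are the same object, so
the two cannot be reordered:

  (type $S (struct (field (mut i32))))
  (func $example (param $a (ref $S)) (param $b (ref $S)) (result i32)
    (struct.set $S 0 (local.get $a) (i32.const 1))  ;; writes the heap
    (struct.get $S 0 (local.get $b)))               ;; reads the heap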
Fixes #3655.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When writing a binary, we take the local indexes in the IR and turn
them into the format in the binary, which clumps them by type. When
writing the names section we should be aware of that ordering, but
we never were, as noticed in #3499.
This fixes that by saving the mapping of locals when we are emitting
the name section, then using it when emitting the local names.
This also fixes the order of the types themselves as part of the
refactoring. We used to depend on the ordering of types to decide
which to emit first, but that isn't good for at least two reasons. First,
it hits #3648 - that order is not fully
defined for recursive types. Also, it's not good for code size - we've
ordered the locals in a way we think is best already (ReorderLocals pass).
This PR makes us pick an order of types based on that, as much as
possible: when we see a type for the first time, we append it to
a list whose order we then use.
Test changes: Some are just because we use a different order than
before, as in atomics64. But some are actual fixes, e.g. in heap-types
where we now have (local $tv (ref null $vector)) which is indeed
right - v there is for vector, and likewise m for matrix etc. - we
just had wrong names before. Another example: we now have
(local $local_externref externref), whereas before the name was
funcref, which was wrong. It seems the incorrectness was
more common on reference types and GC types, which is why this was
not noticed before.
Fixes #3499
Makes part of #3648 moot.
|
|
|
|
|
|
|
| |
This adds support for reading (elem declare func $foo .. in the text and
binary formats. We can simply ignore it: we don't need to represent it in
IR; rather, we find what needs to be declared when writing. That part takes
a little more work, for which this adds a shared helper function.
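For example, a sketch: $f is not placed in any table or exported, so taking its
reference inside a function body requires declaring it:

  (elem declare func $f)
  (func $f)
  (func $get (result funcref)
    (ref.func $f))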
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The comparison implemented by TypeComparator was not previously antisymmetric
because it held that both A < B and B < A when A and B were structurally
identical but nominally distinct recursive types. This meant that it did not
satisfy the conditions of C++'s "Compare" requirement, which meant that std::set
did not operate correctly, as discovered in #3648. This PR fixes the problem by
making A < B false (and vice versa), making type comparisons properly
antisymmetric.
As a drive-by, this also switches to using std::stable_sort in collectHeapTypes
to make the order of the type section completely deterministic across platforms.
Fixes #3648.
|
|
|
|
|
|
|
| |
together (#3647)
Names of structurally identical types end up "collapsed" together after the
types are canonicalized, but with this PR we can properly read content that
has structurally identical types with different names.
|
|
|
| |
catch_all should not have a pop.
|
| |
|
|
|
| |
This updates them to be correct in the current spec and prototype v3.
|