| Commit message | Author | Age | Files | Lines |
| |
If we run a pass that removes DWARF followed by one that could destroy it, then
there is no possible problem: there is nothing left to destroy. We can run the
later pass with no issues (and no warnings).
Also add an assertion that a pass runner is only run once. That has always been
the assumption, and now that we track whether the added passes remove debug
info, we need to check it.
Fixes emscripten-core/emscripten#14161
|
| |
Move the InsertOrderedSet and InsertOrderedMap implementations out of
Relooper.h and into a new insert_ordered.h so they can be used more widely. The
only changes to the implementation code are to use unordered_maps and
`WASM_UNREACHABLE` instead of `abort`.
|
| |
Affects `printMajor` and `printMedium`. There is no usage of this optional
argument in the source code.
|
| |
Via the C API.
|
| |
wasm-as supports --symbolmap=FOO as an argument. We got a request to
support the same in wasm-opt. wasm-opt does have --print-function-map, which
does the same thing, but as a pass. To unify them, use the new pass-arg sugar from
#3882, which allows us to add a --symbolmap pass whose argument can be
set as --symbolmap=FOO. That perfectly matches the wasm-as notation.
For now, keep the old --print-function-map notation as well, to not break
emscripten. After we remove it there, we can remove it here.
|
| |
We have --pass-arg that allows sending an argument to a pass, like this:
wasm-opt --do-stuff --pass-arg=do-stuff@FUNCTION_NAME
With this PR that is equivalent to this:
wasm-opt --do-stuff=FUNCTION_NAME
That is, one can just give an argument to a pass on the command line.
This fixes the Optional mode in command-line.h/cpp. That was not
actually used anywhere before this PR.
Also rename --extract-function's pass argument to match it. That is, the usage
used to be
wasm-opt --extract-function --pass-arg=extract@FUNCTION_NAME
Note how the pass name differed from the pass-arg name. This changes it to
match. This is a breaking change, but I doubt this is used enough to justify
any deprecation / backwards compatibility effort. Any usage is almost
certainly manual, and with this PR writing it manually becomes easier, as one
can do
wasm-opt --extract-function=FUNCTION_NAME
The existing test for that is kept (and renamed), and a new test is added for
the new notation.
This is a step towards unifying the symbol map functionality between
wasm-as and wasm-opt (later PRs will turn the symbol mapping pass into
a pass that receives an argument).
|
| |
Without this fix we can segfault, as it has no body.
Fixes #3879
|
| |
If we branch to a block, and there are no other branches and no final
value on the block, then there is no mixing, and we may be able
to optimize the allocation. Before this PR, any branch stopped us.
To do this, add some helpers in BranchUtils.
The main flow logic in Heap2Local used to stop when we reached a
child for the second time. With branches, however, a child can flow both to
its immediate parent, and to branch targets, and so the proper thing to look
at is when we reach a parent for the second time (which would definitely
indicate mixing).
Tests are added for the new functionality. Note that some existing tests
already covered some things we should not optimize, and so no tests
were needed for them. The existing ones are: $get-through-block,
$branch-to-block.
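To illustrate, here is a minimal sketch (the type and names here are
hypothetical, in the style of the other examples in this log). The allocation
flows out of the block only via the br - there is no other branch and no final
value - so there is no mixing, and the allocation may still be optimized:
(module
 (type $box (struct (field (mut i32))))
 (func $example (result i32)
  ;; The allocation reaches $b only through the br, so it does not mix
  ;; with any other value and can be replaced by a local.
  (struct.get $box 0
   (block $b (result (ref $box))
    (br $b
     (struct.new_with_rtt $box
      (i32.const 42)
      (rtt.canon $box)
     )
    )
   )
  )
 )
)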
|
| |
If we allocate some GC data, and do not let the reference escape, then we can
replace the allocation with locals, basically one local for each field in the
allocation. This avoids the allocation, and also allows us to optimize the
locals further.
On the Dart DeltaBlue benchmark, this is a 24% speedup (making it faster than
the JS version, incidentally), and also a 6% reduction in code size.
The tests are not the best way to show what this does, as the pass assumes
other passes will clean up after it. Here is an example to clarify. First, in
pseudocode:
ref = new Int(42)
do {
  ref.set(ref.get() + 1)
} while (import(ref.get()))
That is, we allocate an int on the heap and use it as a counter. Unnecessarily,
as it could be a normal int on the stack.
Wat:
(module
;; A boxed integer: an entire struct just to hold an int.
(type $boxed-int (struct (field (mut i32))))
(import "env" "import" (func $import (param i32) (result i32)))
(func "example"
(local $ref (ref null $boxed-int))
;; Allocate a boxed integer of 42 and save the reference to it.
(local.set $ref
(struct.new_with_rtt $boxed-int
(i32.const 42)
(rtt.canon $boxed-int)
)
)
;; Increment the integer in a loop, looking for some condition.
(loop $loop
(struct.set $boxed-int 0
(local.get $ref)
(i32.add
(struct.get $boxed-int 0
(local.get $ref)
)
(i32.const 1)
)
)
(br_if $loop
(call $import
(struct.get $boxed-int 0
(local.get $ref)
)
)
)
)
)
)
Before this pass, the optimizer could do essentially nothing with this.
Even with this pass, running -O1 has no effect, as the pass is only
used in -O2+. However, running --heap2local -O1 leads to this:
(func $0
(local $0 i32)
(local.set $0
(i32.const 42)
)
(loop $loop
(br_if $loop
(call $import
(local.tee $0
(i32.add
(local.get $0)
(i32.const 1)
)
)
)
)
)
)
All the GC heap operations have been removed, and we just
have a plain int now, allowing a bunch of other opts to run. That
output is basically the optimal code, I think.
|
| |
If we can't emit something, and instead emit a replacement for it (as is
the case for a StructSet with an unreachable RTT, so we have no known
heap type for it), add a comment that mentions it is a replacement. This
might avoid confusion while debugging.
|
| |
Instead, run RemoveUnusedModuleElements, which does that sort of thing. That
is, this pass just "extracts" the function by turning all others into imports, and then
they should almost all be removable via RemoveUnusedModuleElements, depending
on whether they are used in the table or not, whether the extracted function calls
them, etc.
Without this, we would error if a function was in the table, and so this fixes #3876.
|
| |
Also fix the printing of unreachable StructSets, which must handle the case
of an unreachable reference: then we do not know the RTT, and so we must
print a replacement for the StructSet somehow. Emit a block with drops,
fixing the old behavior, which was missing the drops.
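For instance, a sketch of the kind of replacement that might be printed for a
StructSet whose reference is unreachable (the exact shape is hypothetical; the
printed comment mentioned above is not shown here):
(block
 (drop
  (unreachable)
 )
 (drop
  (i32.const 1)
 )
)
Both children are dropped, so their effects are preserved even though the
StructSet itself cannot be printed.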
|
| |
Precompute not only computes values, but looks at the fallthrough,
(local.set 0
(block
..stuff we can ignore..
;; the fallthrough we care about - if a value is set to local 0, it is this
(i32.const 10)
)
)
Normally that is fine, but the fuzzer found a case where it is not: a RefCast may
return a different type than its fallthrough, even an incompatible type, if we
try to do something invalid like cast a function to a struct. As we may then
propagate the value to a place that expects the proper type, this can cause an
error.
To fix this, check if the precomputed value is a proper subtype. If it is not,
then do not look through into the fallthrough, but compute the entire thing.
(In the case of a bad RefCast of a func to a struct, it would then indicate a
trap happens, and we would not precompute the value.)
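As a sketch of the bad case (with hypothetical names): the cast below declares
a struct type, but its fallthrough value is a funcref, which is not a subtype
of it, so we must not propagate the fallthrough value:
(ref.cast
 (ref.func $f)
 (rtt.canon $struct)
)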
|
| |
The method had TODOs on which it halted. But we should not halt the
entire program, as this is a best-effort attempt to replace a node with
something simpler of the same type (we call it when we know the value
is not actually used).
|
| |
The logic there would construct the cast value separately for functions and data
(as we must), and then, in an attempt to share code, would check if the
cast succeeded or not (and if not, do nothing with the cast value).
But this was wrong, as for some weird casts (like a struct to a function) we
cannot construct a valid cast value, and we error there. Instead, check if the
cast works first, once we know enough to do so, and only then construct the
cast value.
|
| |
We truncated and extended packed values in get and set, but
not during initialization.
Found by the fuzzer.
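A minimal sketch of the issue (the type name is hypothetical):
(module
 (type $packed (struct (field i8)))
 (func $example (result i32)
  ;; The value must be truncated to 8 bits at initialization time,
  ;; so this unsigned get reads 0xFF (255), not 0x1FF (511).
  (struct.get_u $packed 0
   (struct.new_with_rtt $packed
    (i32.const 0x1FF)
    (rtt.canon $packed)
   )
  )
 )
)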
|
| |
Binaryen allows optimizing functions in function-parallel passes while the
module is still being built, that is, while not all the other functions have
even been added to the module yet. Since the removal of asm2wasm that
has not been heavily tested, but the fuzzer found a closely related bug:
in passes like inlining-optimizing, which inline and then optimize the
functions we inlined into, the mechanism for optimizing only the relevant
functions is to create a module with only some of them. (We only want to
optimize the relevant ones, that we inlined into, because this happens
after the main optimization pipeline - we don't want to re-optimize all the
functions if we just inlined into one of them.)
The specific bug here is that ref.cast of a funcref looked up the target
function on the module (in order to get its signature, to see if the cast
has the right RTT for it). The fix is to return a nonconstant flow in that
case, as it is something we cannot precompute. (This does mean we
may miss some optimization opportunities, but as in the case where
we optimize functions before the module is fully built up, we do still
get 99% of function-local optimizations that way, and a subsequent
round of full optimization can be done later if necessary.)
|
| |
Becomes BinaryenElementSegmentIsPassive
|
| |
Add operateOnScopeNameUsesAndSentValues which is like the
existing utilities, but provides the child expression that is sent as a
value on the branch.
Also provide a BranchTargets utility that maps target names to the
control flow structure for them.
Both will be used and tested in escape analysis.
|
| |
A new getImmediateFallthrough is called in a loop. Aside from this being
more efficient than recursion, the new method will be used in escape
analysis.
|
| |
Some passes need setInfluences but not getInfluences, yet were
computing both nonetheless.
This makes e.g. MergeLocals 12% faster. It will also help us use LocalGraph
in new passes with fewer worries about speed.
|
| |
Fix two potential sources of data races identified with the help of thread
sanitizer.
First, keep a lock on the global HeapType store as long as it can reach
temporary types to ensure that no other threads observe the temporary types, for
example if another thread concurrently constructs a new HeapType with the same
shape as one being canonicalized here. This cannot happen with Types because
they are hashed in the global store by pointer identity, which has not yet
escaped the builder, rather than shape.
Second, in the shape canonicalizer, do not replace children of the new, minimal
HeapTypeInfos if they are already canonical. Even though these writes are always
no-ops, they still race because they are visible to other threads via canonical
Types.
|
| |
Add new public `getHeapTypeChildren` methods to Type and HeapType, implemented
using the standard machinery from #3844, and use them to simplify
`ModuleUtils::collectHeapTypes`.
Now that the type traversal code in wasm-type.cpp is not just used in
canonicalization, move it to a more appropriate place in the file. Also, since
the only users of `HeapTypePathWalker` were using it to visit top-level children
only, replace that with a more specialized `HeapTypeChildWalker` to reduce code
duplication.
|
| |
This compares two local.gets and checks whether we are sure they are
equivalent, that is, they contain the same value.
This does not solve the general problem, but uses the existing info to get
a positive answer for the common case where two gets only receive values
from a single set, like
(local.set $x ..)
(a use.. (local.get $x))
(another use.. (local.get $x))
If they only receive values from the same single set, then we know it must
dominate them. The only risk is that the set is "in between" the gets, that is,
that the set occurs after one get and before the other. That can happen in
a loop in theory,
(loop $loop
(use (local.get $x))
(local.set $x ..some new value each iteration..)
(use (local.get $x))
(br_if $loop ..)
)
Both of those gets receive a value from the set, and they may be different
values, from different loop iterations. But as mentioned in the source code,
this is not a problem since wasm always has a zero-initialization value, and
so the first local.get in that loop would have another set from which it can
receive a value, the function entry. (The only way to avoid that is for this
entire code to be unreachable, in which case nothing matters.)
This will be useful in dead store elimination, which has to use this to reason
about references and pointers in order to be able to do anything useful with
GC and memory.
|
| |
This is similar to the limit in TypeNamePrinter in Print.cpp. This limit
affects the printed type when debugging with std::cout << type etc.,
which just prints the structure and not the name.
|
| |
Add clear().
Add UniqueNonrepeatingDeferredQueue which also has the property
that it never repeats values in the output.
Also add unit tests.
|
| |
Fixes #3843.
The issue was that during LUB type building, Hopcroft's algorithm was only
running on the temporary HeapTypes in the TypeBuilder and not considering the
globally canonical HeapTypes that were reachable from the temporary HeapTypes.
That meant that temporary HeapTypes that referred to and were equirecursively
equivalent to the globally canonical types were not properly minimized and could
not be matched to the corresponding globally canonical HeapTypes.
The fix is to run Hopcroft's algorithm on the complete HeapType graph, not just
the root HeapTypes. Since there were already multiple implementations of type
graph traversal, this PR consolidates them into a generic type traversal
utility. Although this creates more boilerplate, it also reduces code
duplication and will be easier to maintain and reuse.
Now that Hopcroft's algorithm partitions can contain globally canonical
HeapTypes, this PR also updates the `translateToTypes` step of shape
canonicalization to reuse the globally canonical types unchanged, since they
must already be minimal. Without this change, `translateToTypes` could end up
incorrectly inserting temporary HeapTypes into the globally canonical type
graph. Unfortunately, this change complicates the interface presented by
`ShapeCanonicalizer` because it no longer owns the HeapTypeInfos backing all of
the minimized types. Fixing this is left as future work.
|
| |
For example,
(if (result i32)
(local.get $x)
(return
(local.get $y)
)
(return
(local.get $z)
)
)
If we moved the returns outside, the if would become unreachable, but we should
not make such type changes in this pass (they are handled by DCE and
Vacuum).
(Found by the fuzzer.)
|
| |
Fixes #3838 and current breakage on CI here.
IIUC the warning is that we define a copy assignment operator, but
did not define a copy constructor. Clang wants both to exist now. However,
we delete the copy assignment one, so perhaps we should find a way to
avoid a copy on construction too? That happens here:
../src/wasm/wasm-type.cpp:60:51: note: in implicit copy constructor for 'wasm::Tuple' first required here
TypeInfo(const Tuple& tuple) : kind(TupleKind), tuple(tuple) {}
^
But I don't see an easy way to remove the copy there, so this PR
defines the copy constructor as clang wants.
|
| |
We tried to ignore unreachable code, but only checked the type of
the entire node. However, an arm might be unreachable even when the node
is not, and after moving code around that requires more work to update
the type. Such cases are best left to DCE anyhow, so just check for any
unreachability and stop there.
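For example, a sketch (with hypothetical locals $x and $y): the if's type is
i32, but its first arm is unreachable, so moving the identical drops out of
the arms would require updating the types of the remaining blocks. We now
simply skip such cases:
(if (result i32)
 (local.get $x)
 (block
  (drop (local.get $y))
  (unreachable)
 )
 (block (result i32)
  (drop (local.get $y))
  (i32.const 1)
 )
)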
|
| |
We could more carefully see when a local is not needed there, but at the moment
we always add one, and that doesn't work for something non-defaultable.
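A one-line sketch of the problem (with a hypothetical type $T):
 (local $temp (ref $T)) ;; invalid: a non-nullable local has no default value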
|
| |
Effects are fine in the moved code if we are moving it out of an if
(which runs just one arm anyhow).
Allow unreachable, which lets us hoist returns, for example.
Allow none, which lets us hoist drop and call, for example. For
this we also need to be careful with subtyping, as at least drop
is polymorphic, so the child types may not have an LUB (see
example in code).
Adds a small ShallowEffectAnalyzer child of EffectAnalyzer that
calls visit to just do a shallow analysis (instead of walk which
walks the children).
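For example, a sketch of the drop case: both arms end in a drop, but the
dropped values, an i32 and an f64, have no LUB, so the drop must not be
hoisted out of the if:
(if
 (local.get $x)
 (drop (i32.const 1))
 (drop (f64.const 2))
)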
|
| |
GOT.mem or GOT.func import (#3831)
This prevents the DCE of used symbols in emscripten's `MAIN_MODULE=2`
use case, which we are starting to use and recommend a lot more.
Part of the fix for https://github.com/emscripten-core/emscripten/issues/13786
|
| |
(select
(foo
(X)
)
(foo
(Y)
)
(condition)
)
=>
(foo
(select
(X)
(Y)
(condition)
)
)
To make this simpler, refactor optimizeTernary to be templated.
|
|
| |
When we change a local's type, as we do in that method when we
turn a non-nullable (invalid) local into a nullable (valid) one, we must
update tees as well as gets - their type must match the changed
local type, and we must cast them so that their users do not see
a change.
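A sketch of the idea (with a hypothetical type $T; this assumes the wrapping
cast is ref.as_non_null): once the local becomes nullable, each tee takes on
the new, nullable type, and is then cast so its users still see a non-null
value:
(ref.as_non_null
 (local.tee $x
  (struct.new_with_rtt $T
   (rtt.canon $T)
  )
 )
)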
|
| |
Instead of creating a class and doing a traversal, simply use
wasm-delegates-fields to get the children directly. This is simpler and
faster.
This requires a new way for ChildValueIterator to override the behavior.
I ended up using CRTP.
This stores pointers to the children instead of the children themselves. This
will allow modifying the children using these classes (which is the reason
I started this refactoring - the next PR will use it). This is also slightly
faster in theory, as we avoid doing loads from memory to find the
children (which we may never need to do if the iteration is stopped
early).
Not the goal here, but this speeds up roundtripping by 12%.
This is NFC as nothing uses the new ability to modify things yet.
|
| |
We used to print active element segments right after their corresponding
tables, and passive segments came after those. We didn't print internal
segment names, and empty segments weren't printed at all. This
meant that there was no way for instructions to refer to those table
segments after round tripping.
This fixes those issues by printing segments in the order they were
defined, including segment names when necessary, and no longer omitting
empty segments.
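For example, a sketch with hypothetical names: both the empty segment and the
named passive one now print, so instructions can still refer to them by name
after round tripping:
(module
 (table $t 1 funcref)
 (func $f)
 (elem $empty func)
 (elem $seg func $f)
)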
|
| |
#3792 added support for module linking and the (register ...) command in
wasm-shell, but overlooked three problems:
- Splitting spec tests prevents linking test modules together.
- Registered modules may still be used in assertions or an invoke command.
- Modules may re-export imported objects.
This PR appends transformed modules after binary checks to a spec.wast
file, plus assertion tests and register commands, and then runs wasm-shell
on the whole file. It also keeps both the module name and its registered
name available in wasm-shell for use in shell commands and linked
modules. Furthermore, it correctly finds the module where an object is
defined even if it is imported and re-exported several times.
The updated version of imports.wast spec test is enabled to verify the
fixes.
|
| |
- Support functions appearing more than once in the table. It turns out we
were assuming and asserting that functions would appear at most once, but we
weren't making use of that assumption in any way.
- Make TableSlotManager::getSlot take a function Name rather than a RefFunc
expression to avoid allocating and leaking unnecessary expressions.
- Add and use a Builder interface for building TableElementSegments to make
them more similar to other module-level items.
|
| |
The existing restructuring code could turn a block+br_if into an if in
simple cases, but it had some TODOs that I noticed would be helpful on
GC benchmarks.
One limitation was that we need to reorder the condition and the value:
(block
(br_if
(value)
(condition)
)
(...)
)
=>
(if
(condition)
(value)
(...)
)
The old code checked for side effects in the condition. But it is ok for it
to have side effects if they can be reordered with the value (for example,
if the value is a constant then it definitely does not care about side effects
in the condition).
The other missing TODO is to use a select when we can't use an if:
(block
(drop
(br_if
(value)
(condition)
)
)
(...)
)
=>
(select
(value)
(...)
(condition)
)
In this case we do not reorder the condition and the value, but we do
reorder the condition with the rest of the block.
|
| |
(select
(i32.eqz (X))
(i32.const 0|1)
(Y)
)
=>
(i32.eqz
(select
(X)
(i32.const 1|0)
(Y)
)
)
This is beneficial as the eqz may be folded into something on the outside. I
see this pattern in real-world code, both in a GC benchmark (which is why I
noticed it) and in the emscripten benchmark suite, where it shrinks code size
by tiny amounts as well.
|
| |
In both cases doing the ref.as_non_null last is beneficial as we have
optimizations that can remove it based on where it is consumed.
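As a sketch of the idea (the specific patterns this handles are in the pass;
this select form, with a hypothetical type $T, is just an illustration):
(select (result (ref $T))
 (ref.as_non_null (local.get $a))
 (ref.as_non_null (local.get $b))
 (local.get $c)
)
=>
(ref.as_non_null
 (select (result (ref null $T))
  (local.get $a)
  (local.get $b)
  (local.get $c)
 )
)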
|
| |
* Note that ref.cast has a fallthrough value.
* Optimize ref.eq on identical inputs.
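For example, a sketch of the ref.eq case (this assumes the inputs have no side
effects):
(ref.eq
 (local.get $x)
 (local.get $x)
)
=>
(i32.const 1)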
|
| |
This prevents used imports which also happen to have duplicate names and
therefore cannot be provided by wasm (JS is able to fill these in with
polymorphic JS functions).
I noticed this when working on emscripten and directly hooking modules
together. I was seeing failures, but not in release builds (because
wasm-opt would mop these up in release builds).
|
| |
This is a rewrite of the wasm-shell tool, with the goal of improved
compatibility with the reference interpreter and the spec test suite.
To facilitate that, module instances are provided with a list of linked
instances, and imported objects are looked up in the correct instance.
The new shell can:
- register and link modules using the (register ...) command.
- parse binary modules with the syntax (module binary ...).
- provide the "spectest" module defined in the reference interpreter
- assert instantiation traps with assert_trap
- better check linkability in assert_unlinkable by looking up the linked
  instances.
It cannot call external function references that are not direct imports.
That would require bigger changes.
|
| |
This fixes precomputation on GC after #3803 was too optimistic.
The issue is subtle. Precompute will repeatedly evaluate expressions and
propagate their values, flowing them around, and it ignores side effects
when doing so. For example:
(block
..side effect..
(i32.const 1)
)
When we evaluate that we see there are side effects, but regardless of them
we know the value flowing out is 1. So we can propagate that value, if it is
assigned to a local and read elsewhere.
This is not valid for GC because struct.new and array.new have a "side
effect" that is noticeable in the result. Each time we call struct.new we get a
new struct with a new address, which ref.eq can distinguish. So when this
pass evaluates the same thing multiple times it will get a different result.
Also, we can't precompute a struct.get even if we know the struct, not unless
we know the reference has not escaped (where a call could modify it).
To avoid all that, do not precompute references, aside from the trivially safe ones
like nulls and function references (simple constants that are the same each time
we evaluate the expression emitting them).
precomputeExpression() had a minor bug, which this fixes. It checked the type
of the expression to see if we can create a constant for it, but really it should
check the value, since (separate from this PR) we have no way to emit a
"constant" for a struct etc. Also, that only matters if replaceExpression is true,
that is, if we are replacing with a constant; if we just want the value internally,
we have no limit on that.
Also add Literal support for comparing GC refs, which is used by ref.eq. Without
that tiny fix the tests here crash.
This adds a bunch of tests, many for corner cases that we don't handle (since
the PR makes us not propagate GC references). But they should be helpful
if/when we do, to avoid the mistakes in #3803.
|