| Commit message | Author | Age | Files | Lines |
| |
Fixes #3843.
The issue was that during LUB type building, Hopcroft's algorithm was only
running on the temporary HeapTypes in the TypeBuilder and not considering the
globally canonical HeapTypes that were reachable from the temporary HeapTypes.
That meant that temporary HeapTypes that referred to and were equirecursively
equivalent to the globally canonical types were not properly minimized and could
not be matched to the corresponding globally canonical HeapTypes.
The fix is to run Hopcroft's algorithm on the complete HeapType graph, not just
the root HeapTypes. Since there were already multiple implementations of type
graph traversal, this PR consolidates them into a generic type traversal
utility. Although this creates more boilerplate, it also reduces code
duplication and will be easier to maintain and reuse.
Now that Hopcroft's algorithm partitions can contain globally canonical
HeapTypes, this PR also updates the `translateToTypes` step of shape
canonicalization to reuse the globally canonical types unchanged, since they
must already be minimal. Without this change, `translateToTypes` could end up
incorrectly inserting temporary HeapTypes into the globally canonical type
graph. Unfortunately, this change complicates the interface presented by
`ShapeCanonicalizer` because it no longer owns the HeapTypeInfos backing all of
the minimized types. Fixing this is left as future work.
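To illustrate the consolidated traversal, here is a minimal standalone sketch of a generic type-graph walk (not Binaryen's actual utility; TypeId, childrenOf, and walkTypeGraph are hypothetical stand-ins):
#include <functional>
#include <unordered_set>
#include <vector>
// Hypothetical stand-in for a heap type handle; not Binaryen's real type.
using TypeId = unsigned;
// Visit every type reachable from the roots exactly once, including the types
// that temporary types refer to (e.g. globally canonical ones).
void walkTypeGraph(const std::vector<TypeId>& roots,
                   const std::function<std::vector<TypeId>(TypeId)>& childrenOf,
                   const std::function<void(TypeId)>& visit) {
  std::unordered_set<TypeId> seen;
  std::vector<TypeId> stack(roots.begin(), roots.end());
  while (!stack.empty()) {
    TypeId curr = stack.back();
    stack.pop_back();
    if (!seen.insert(curr).second) {
      continue; // already visited
    }
    visit(curr);
    for (TypeId child : childrenOf(curr)) {
      stack.push_back(child);
    }
  }
}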
|
| |
For example,
(if (result i32)
(local.get $x)
(return
(local.get $y)
)
(return
(local.get $z)
)
)
If we moved the returns outside, the if would become unreachable, but we should
not make such type changes in this pass (they are handled by DCE and Vacuum).
(Found by the fuzzer.)
|
| |
Fixes #3838 and current breakage on CI here.
IIUC the warning is that we define a copy assignment operator but do not define
a copy constructor; clang now wants both to exist. However, we delete the copy
assignment operator, so perhaps we should find a way to avoid a copy on
construction too? That happens here:
../src/wasm/wasm-type.cpp:60:51: note: in implicit copy constructor for 'wasm::Tuple' first required here
TypeInfo(const Tuple& tuple) : kind(TupleKind), tuple(tuple) {}
^
But I don't see an easy way to remove the copy there, so this PR
defines the copy constructor as clang wants.
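For illustration, a minimal standalone sketch of the general pattern (not the actual wasm-type.cpp code; this Tuple-like class is a simplified stand-in):
#include <vector>
// Simplified stand-in; not the real wasm-type.cpp Tuple.
struct Tuple {
  std::vector<int> types;
  Tuple() = default;
  // Declaring the copy constructor explicitly is what satisfies clang, since
  // the copy assignment operator below is user-declared (deleted).
  Tuple(const Tuple& other) = default;
  Tuple& operator=(const Tuple& other) = delete;
};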
|
| |
We tried to ignore unreachable code, but only checked the type of the entire
node. However, an arm might be unreachable, and after moving code around,
updating the type requires more work. Such cases are best left to DCE anyhow,
so just check for any unreachability and stop there.
|
| |
We could more carefully check when a local is not needed there, but at the
moment we always add one, and that does not work for something nondefaultable.
|
| |
Effects are fine in the moved code if we are doing this on an if
(which runs just one arm anyhow).
Allow unreachable, which lets us hoist returns, for example.
Allow none, which lets us hoist drop and call, for example. For
this we also need to be careful with subtyping, as at least drop
is polymorphic, so the child types may not have an LUB (see
example in code).
Adds a small ShallowEffectAnalyzer child of EffectAnalyzer that
calls visit to do just a shallow analysis (instead of walk, which
also walks the children).
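A rough standalone sketch of the walk-vs-visit distinction (not the actual EffectAnalyzer API; Expr and EffectAccumulator are hypothetical stand-ins):
#include <vector>
struct Expr {
  bool hasOwnSideEffect = false;
  std::vector<Expr*> children;
};
struct EffectAccumulator {
  bool sideEffects = false;
  // Shallow: consider only this node.
  void visit(Expr* curr) { sideEffects = sideEffects || curr->hasOwnSideEffect; }
  // Deep: this node and the entire subtree under it.
  void walk(Expr* curr) {
    visit(curr);
    for (Expr* child : curr->children) {
      walk(child);
    }
  }
};
// A shallow analyzer is then a thin wrapper that calls visit instead of walk.
struct ShallowEffectAccumulator : EffectAccumulator {
  explicit ShallowEffectAccumulator(Expr* curr) { visit(curr); }
};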
|
| |
GOT.mem or GOT.func import (#3831)
This prevents the DCE of used symbols in emscripten's `MAIN_MODULE=2`
use case which we are starting to use and recommend a lot more.
Part of the fix for https://github.com/emscripten-core/emscripten/issues/13786
|
| |
(select
(foo
(X)
)
(foo
(Y)
)
(condition)
)
=>
(foo
(select
(X)
(Y)
(condition)
)
)
To make this simpler, refactor optimizeTernary to be templated.
|
| |
When we change a local's type, as we do in that method when we
turn a non-nullable (invalid) local into a nullable (valid) one, we must
update tees as well as gets - their type must match the changed
local type, and we must cast them so that their users do not see
a change.
|
| |
Instead of creating a class and doing a traversal, simply use
wasm-delegates-fields to get the children directly. This is simpler and
faster.
This requires a new way for ChildValueIterator to override the behavior;
I ended up using CRTP.
This stores pointers to the children instead of the children themselves. That
will allow modifying the children using these classes (which is the reason
I started this refactoring - the next PR will use it). It is also slightly
faster in theory, as we avoid loads from memory to find the
children (which we may never even need to do if the iteration is stopped
early).
Not the goal here, but this speeds up roundtripping by 12%.
This is NFC as nothing uses the new ability to modify things yet.
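A rough standalone sketch of the CRTP idea (not the actual Binaryen classes; Expr, AbstractChildIterator, and NonNullChildIterator are hypothetical stand-ins):
#include <vector>
struct Expr {
  std::vector<Expr*> operands;
};
// The base collects child *pointers*, which lets callers later modify the
// children in place; the most-derived class decides what to do per child.
template<typename Self>
struct AbstractChildIterator {
  std::vector<Expr**> children;
  void scan(Expr* parent) {
    for (Expr*& child : parent->operands) {
      static_cast<Self*>(this)->addChild(&child); // CRTP dispatch
    }
  }
  void addChild(Expr** childp) { children.push_back(childp); }
};
struct ChildIterator : AbstractChildIterator<ChildIterator> {
  explicit ChildIterator(Expr* parent) { scan(parent); }
};
// A hypothetical variant that overrides how children are recorded.
struct NonNullChildIterator : AbstractChildIterator<NonNullChildIterator> {
  explicit NonNullChildIterator(Expr* parent) { scan(parent); }
  void addChild(Expr** childp) {
    if (*childp != nullptr) {
      children.push_back(childp);
    }
  }
};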
|
| |
We used to print active element segments right after the corresponding
tables, and passive segments came after those. We didn't print internal
segment names, and empty segments weren't printed at all. This
meant that there was no way for instructions to refer to those table
segments after round tripping.
This fixes those issues by printing segments in the order they were
defined, including segment names when necessary, and no longer omitting
empty segments.
|
| |
#3792 added support for module linking and the (register ...) command in
wasm-shell, but overlooked three problems:
- Splitting spec tests prevents linking test modules together.
- Registered modules may still be used in assertions or an invoke.
- Modules may re-export imported objects.
This PR appends transformed modules, plus assertion tests and register
commands, to a spec.wast file after the binary checks, then runs wasm-shell
on the whole file. It also keeps both the module name and its registered
name available in wasm-shell for use in shell commands and linked
modules. Furthermore, it correctly finds the module where an object is
defined even if it is imported and re-exported several times.
The updated version of the imports.wast spec test is enabled to verify the
fixes.
|
| |
- Support functions appearing more than once in the table. It turns out we
were assuming and asserting that functions would appear at most once, but we
weren't making use of that assumption in any way.
- Make TableSlotManager::getSlot take a function Name rather than a RefFunc
expression to avoid allocating and leaking unnecessary expressions.
- Add and use a Builder interface for building TableElementSegments to make
them more similar to other module-level items.
|
| |
The existing restructuring code could turn a block+br_if into an if in
simple cases, but it had some TODOs that I noticed would be helpful on
GC benchmarks.
One limitation was that we need to reorder the condition and the value:
(block
(br_if
(value)
(condition)
)
(...)
)
=>
(if
(condition)
(value)
(...)
)
The old code checked for side effects in the condition. But it is ok for the
condition to have side effects if they can be reordered with the value (for
example, if the value is a constant then it definitely does not care about
side effects in the condition).
The other missing TODO is to use a select when we can't use an if:
(block
(drop
(br_if
(value)
(condition)
)
)
(...)
)
=>
(select
(value)
(...)
(condition)
)
In this case we do not reorder the condition and the value, but we do
reorder the condition with the rest of the block.
|
| |
(select
(i32.eqz (X))
(i32.const 0|1)
(Y)
)
=>
(i32.eqz
(select
(X)
(i32.const 1|0)
(Y)
)
)
This is beneficial as the eqz may be folded into something on the outside. I
see this pattern in real-world code: in a GC benchmark (which is why I
noticed it), and it also shrinks code size by tiny amounts on the emscripten
benchmark suite.
|
| |
In both cases doing the ref.as_non_null last is beneficial as we have
optimizations that can remove it based on where it is consumed.
|
| |
* Note that ref.cast has a fallthrough value.
* Optimize ref.eq on identical inputs.
|
| |
This prevents used imports which also happen to have duplicate names and
therefore cannot be provided by wasm (JS is happy to fill these in with
polymorphic JS functions).
I noticed this when working on emscripten and directly hooking modules
together. I was seeing failures, but not in release builds (because
wasm-opt would mop these up in release builds).
|
| |
This is a rewrite of the wasm-shell tool, with the goal of improved
compatibility with the reference interpreter and the spec test suite.
To facilitate that, module instances are provided with a list of linked
instances, and imported objects are looked up in the correct instance.
The new shell can:
- register and link modules using the (register ...) command.
- parse binary modules with the syntax (module binary ...).
- provide the "spectest" module defined in the reference interpreter.
- assert instantiation traps with assert_trap.
- better check linkability by looking up the linked instances in
  assert_unlinkable.
It cannot call external function references that are not direct imports.
That would require bigger changes.
|
| |
This fixes precomputation on GC after #3803 was too optimistic.
The issue is subtle. Precompute will repeatedly evaluate expressions and
propagate their values, flowing them around, and it ignores side effects
when doing so. For example:
(block
..side effect..
(i32.const 1)
)
When we evaluate that we see there are side effects, but regardless of them
we know the value flowing out is 1. So we can propagate that value, if it is
assigned to a local and read elsewhere.
This is not valid for GC because struct.new and array.new have a "side
effect" that is noticeable in the result. Each time we call struct.new we get a
new struct with a new address, which ref.eq can distinguish. So when this
pass evaluates the same thing multiple times it will get a different result.
Also, we can't precompute a struct.get even if we know the struct, not unless
we know the reference has not escaped (where a call could modify it).
To avoid all that, do not precompute references, aside from the trivially safe ones
like nulls and function references (simple constants that are the same each time
we evaluate the expression emitting them).
precomputeExpression() had a minor bug which this fixes: it checked the type
of the expression to see if we can create a constant for it, but really it should
check the value, since (separate from this PR) we have no way to emit a
"constant" for a struct etc. Also, that only matters if replaceExpression is true,
that is, if we are replacing with a constant; if we just want the value internally,
there is no limit on that.
Also add Literal support for comparing GC refs, which is used by ref.eq. Without
that tiny fix the tests here crash.
This adds a bunch of tests, many for corner cases that we don't handle (since
the PR makes us not propagate GC references). But they should be helpful
if/when we do, to avoid the mistakes in #3803.
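As a C++ analogy (not Binaryen code) for why an allocation cannot be treated as a pure value:
#include <iostream>
// Evaluating the "same" allocation expression twice yields two objects with
// distinct, observable identities - just as two executions of struct.new
// yield references that ref.eq can tell apart.
int main() {
  int* a = new int(1);
  int* b = new int(1);
  std::cout << (a == b) << '\n'; // prints 0: same expression, different identity
  delete a;
  delete b;
}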
|
| |
Turns out just removing the mangling wasn't enough for
emscripten to support both before and after versions.
See https://github.com/WebAssembly/binaryen/pull/3785
|
| |
See https://github.com/emscripten-core/emscripten/issues/13893
|
| |
See https://github.com/emscripten-core/emscripten/pull/13847
|
| |
To reduce allocator contention, MixedArena transparently forwards allocations to a
thread-local arena by traversing a linked list of arenas, looking for the one
corresponding to the current thread. If the search reaches the end of the linked
list, it allocates a new arena for the current thread and atomically appends it
to the end of the list using a compare-and-swap operation.
The problem was that the previous code used `compare_exchange_weak`, which is
allowed to spuriously fail, i.e. it is allowed to behave as though the original
value is not the same as the expected value, even when it is. In this case, that
means that it might fail to append the new allocation to the list even if the
`next` pointer is still null, which results in a subsequent null pointer
dereference. The fix is to use `compare_exchange_strong`, which is not allowed
to spuriously fail in this way.
Reported in #3806.
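A minimal standalone sketch of the append loop (not the actual MixedArena code; Arena and appendArena are simplified stand-ins), showing why the strong variant is needed:
#include <atomic>
// Simplified stand-in, not the real MixedArena.
struct Arena {
  std::atomic<Arena*> next{nullptr};
};
// Walk to the end of the list and atomically append `fresh`.
void appendArena(Arena* head, Arena* fresh) {
  Arena* curr = head;
  while (true) {
    Arena* expected = nullptr;
    // The strong CAS fails only if curr->next is genuinely non-null, in which
    // case `expected` holds the real next node and we keep walking. With
    // compare_exchange_weak, a spurious failure could leave `expected` null,
    // and the next iteration would dereference a null pointer.
    if (curr->next.compare_exchange_strong(expected, fresh)) {
      return; // appended at the end of the list
    }
    curr = expected;
  }
}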
|
| |
Inlined parameters become locals, and rtts cannot be handled as locals, unlike
non-nullable values which we can at least fix up. So do not inline functions with
rtt params.
|
| |
The precompute pass ignored all reference types, but that was overly
pessimistic: we can precompute some of them, namely nulls and references
to functions, which are fully precomputable.
To allow that to work, add the missing integration in getFallthrough as well.
With this, we can precompute quite a lot of field accesses in the existing
-Oz testcase, as can be seen from the output. That testcase runs
--fuzz-exec, so it prints out all those logged values, proving they have
not changed.
|
| |
Host limitations are arbitrary and can be modified by optimizations, so
ignore them. For example, if the optimizer removes allocations then a
host limit on an allocation error may vanish. Or, an optimization that
removes recursion and replaces it with a loop may avoid a host limit
on call depth (that is not done currently, but might be done some day).
This removes a class of annoying false positives in the fuzzer.
|
| |
Previously we just used unreachable. This also tries nop when possible,
and sometimes that is better (if the code is called, a nop may be a less
intrusive change).
|
| |
ref.as_non_null is not needed if the value flows into a place that traps
on null anyhow. We replace a trap on one instruction with a trap on
another, but we allow such things (and even changing trap types, which
does not happen here).
|
| |
Renames the SIMD instructions
* LoadExtSVec8x8ToVecI16x8 -> Load8x8SVec128
* LoadExtUVec8x8ToVecI16x8 -> Load8x8UVec128
* LoadExtSVec16x4ToVecI32x4 -> Load16x4SVec128
* LoadExtUVec16x4ToVecI32x4 -> Load16x4UVec128
* LoadExtSVec32x2ToVecI64x2 -> Load32x2SVec128
* LoadExtUVec32x2ToVecI64x2 -> Load32x2UVec128
|
| |
Updates binary constants of SIMD instructions to match new opcodes:
* I16x8LoadExtSVec8x8 -> V128Load8x8S
* I16x8LoadExtUVec8x8 -> V128Load8x8U
* I32x4LoadExtSVec16x4 -> V128Load16x4S
* I32x4LoadExtUVec16x4 -> V128Load16x4U
* I64x2LoadExtSVec32x2 -> V128Load32x2S
* I64x2LoadExtUVec32x2 -> V128Load32x2U
* V8x16LoadSplat -> V128Load8Splat
* V16x8LoadSplat -> V128Load16Splat
* V32x4LoadSplat -> V128Load32Splat
* V64x2LoadSplat -> V128Load64Splat
* V8x16Shuffle -> I8x16Shuffle
* V8x16Swizzle -> I8x16Swizzle
* V128AndNot -> V128Andnot
* F32x4DemoteZeroF64x2 -> F32x4DemoteF64x2Zero
* I8x16NarrowSI16x8 -> I8x16NarrowI16x8S
* I8x16NarrowUI16x8 -> I8x16NarrowI16x8U
* I16x8ExtAddPairWiseSI8x16 -> I16x8ExtaddPairwiseI8x16S
* I16x8ExtAddPairWiseUI8x16 -> I16x8ExtaddPairwiseI8x16U
* I32x4ExtAddPairWiseSI16x8 -> I32x4ExtaddPairwiseI16x8S
* I32x4ExtAddPairWiseUI16x8 -> I32x4ExtaddPairwiseI16x8U
* I16x8Q15MulrSatS -> I16x8Q15mulrSatS
* I16x8NarrowSI32x4 -> I16x8NarrowI32x4S
* I16x8NarrowUI32x4 -> I16x8NarrowI32x4U
* I16x8ExtendLowSI8x16 -> I16x8ExtendLowI8x16S
* I16x8ExtendHighSI8x16 -> I16x8ExtendHighI8x16S
* I16x8ExtendLowUI8x16 -> I16x8ExtendLowI8x16U
* I16x8ExtendHighUI8x16 -> I16x8ExtendHighI8x16U
* I16x8ExtMulLowSI8x16 -> I16x8ExtmulLowI8x16S
* I16x8ExtMulHighSI8x16 -> I16x8ExtmulHighI8x16S
* I16x8ExtMulLowUI8x16 -> I16x8ExtmulLowI8x16U
* I16x8ExtMulHighUI8x16 -> I16x8ExtmulHighI8x16U
* I32x4ExtendLowSI16x8 -> I32x4ExtendLowI16x8S
* I32x4ExtendHighSI16x8 -> I32x4ExtendHighI16x8S
* I32x4ExtendLowUI16x8 -> I32x4ExtendLowI16x8U
* I32x4ExtendHighUI16x8 -> I32x4ExtendHighI16x8U
* I32x4DotSVecI16x8 -> I32x4DotI16x8S
* I32x4ExtMulLowSI16x8 -> I32x4ExtmulLowI16x8S
* I32x4ExtMulHighSI16x8 -> I32x4ExtmulHighI16x8S
* I32x4ExtMulLowUI16x8 -> I32x4ExtmulLowI16x8U
* I32x4ExtMulHighUI16x8 -> I32x4ExtmulHighI16x8U
* I64x2ExtendLowSI32x4 -> I64x2ExtendLowI32x4S
* I64x2ExtendHighSI32x4 -> I64x2ExtendHighI32x4S
* I64x2ExtendLowUI32x4 -> I64x2ExtendLowI32x4U
* I64x2ExtendHighUI32x4 -> I64x2ExtendHighI32x4U
* I64x2ExtMulLowSI32x4 -> I64x2ExtmulLowI32x4S
* I64x2ExtMulHighSI32x4 -> I64x2ExtmulHighI32x4S
* I64x2ExtMulLowUI32x4 -> I64x2ExtmulLowI32x4U
* I64x2ExtMulHighUI32x4 -> I64x2ExtmulHighI32x4U
* F32x4PMin -> F32x4Pmin
* F32x4PMax -> F32x4Pmax
* F64x2PMin -> F64x2Pmin
* F64x2PMax -> F64x2Pmax
* I32x4TruncSatSF32x4 -> I32x4TruncSatF32x4S
* I32x4TruncSatUF32x4 -> I32x4TruncSatF32x4U
* F32x4ConvertSI32x4 -> F32x4ConvertI32x4S
* F32x4ConvertUI32x4 -> F32x4ConvertI32x4U
* I32x4TruncSatZeroSF64x2 -> I32x4TruncSatF64x2SZero
* I32x4TruncSatZeroUF64x2 -> I32x4TruncSatF64x2UZero
* F64x2ConvertLowSI32x4 -> F64x2ConvertLowI32x4S
* F64x2ConvertLowUI32x4 -> F64x2ConvertLowI32x4U
|
| |
Renames the SIMD instructions
* LoadSplatVec8x16 -> Load8SplatVec128
* LoadSplatVec16x8 -> Load16SplatVec128
* LoadSplatVec32x4 -> Load32SplatVec128
* LoadSplatVec64x2 -> Load64SplatVec128
* Load32Zero -> Load32ZeroVec128
* Load64Zero -> Load64ZeroVec128
|
| |
I'm not sure what `stack$init` is, but I don't think it's been
used for many years.
|
| |
the builder (#3790)
The builder can receive a HeapType so that callers don't need to set non-nullability
themselves.
Not NFC as some of the callers were in fact still making it nullable.
|
| |
non-funcref table (#3787)
|
| |
Adds C/JS APIs for the SIMD instructions
* Load8LaneVec128 (was LoadLaneVec8x16)
* Load16LaneVec128 (was LoadLaneVec16x8)
* Load32LaneVec128 (was LoadLaneVec32x4)
* Load64LaneVec128 (was LoadLaneVec64x2)
* Store8LaneVec128 (was StoreLaneVec8x16)
* Store16LaneVec128 (was StoreLaneVec16x8)
* Store32LaneVec128 (was StoreLaneVec32x4)
* Store64LaneVec128 (was StoreLaneVec64x2)
|
| |
Adds C/JS APIs for the SIMD instructions
* Load32Zero
* Load64Zero
|
| |
Adds C/JS APIs for the SIMD instructions
* Q15MulrSatSVecI16x8
* ExtMulLowSVecI16x8
* ExtMulHighSVecI16x8
* ExtMulLowUVecI16x8
* ExtMulHighUVecI16x8
* ExtMulLowSVecI32x4
* ExtMulHighSVecI32x4
* ExtMulLowUVecI32x4
* ExtMulHighUVecI32x4
* ExtMulLowSVecI64x2
* ExtMulHighSVecI64x2
* ExtMulLowUVecI64x2
* ExtMulHighUVecI64x2
|
| |
Adds C/JS APIs for the SIMD instructions
* ConvertLowSVecI32x4ToVecF64x2
* ConvertLowUVecI32x4ToVecF64x2
* TruncSatZeroSVecF64x2ToVecI32x4
* TruncSatZeroUVecF64x2ToVecI32x4
* DemoteZeroVecF64x2ToVecF32x4
* PromoteLowVecF32x4ToVecF64x2
|
| |
Adds C/JS APIs for the SIMD instructions
* ExtAddPairwiseSVecI8x16ToI16x8
* ExtAddPairwiseUVecI8x16ToI16x8
* ExtAddPairwiseSVecI16x8ToI32x4
* ExtAddPairwiseUVecI16x8ToI32x4
|
| |
See https://webassembly.github.io/spec/core/appendix/custom.html
"Each subsection may occur at most once, and in order of increasing id."
|
| |
Adds C/JS APIs for the SIMD instructions
* ExtendLowSVecI32x4ToVecI64x2
* ExtendHighSVecI32x4ToVecI64x2
* ExtendLowUVecI32x4ToVecI64x2
* ExtendHighUVecI32x4ToVecI64x2
|
| |
Adds C/JS APIs for the SIMD instructions
* PopcntVecI8x16
* AbsVecI64x2
* AllTrueVecI64x2
* BitmaskVecI64x2
* EqVecI64x2
* NeVecI64x2
* LtSVecI64x2
* GtSVecI64x2
* LeSVecI64x2
* GeSVecI64x2
|
| |
(#3559)
The spec does not mention traps here, but this is like a JS VM trapping on
OOM - a runtime limitation is reached.
As these are not specced traps, I did not add them to effects.h. Note how,
as a result, the optimizer happily optimizes an unused allocation of an
array of size unsigned(-1) into a nop, which is the behavior we want.
|