| Commit message | Author | Age | Files | Lines |
| |
It turns out that just removing the mangling wasn't enough for emscripten to support both the before and after versions.
See https://github.com/WebAssembly/binaryen/pull/3785
| |
See https://github.com/emscripten-core/emscripten/issues/13893
| |
See https://github.com/emscripten-core/emscripten/pull/13847
| |
To reduce allocator contention, MixedArena transparently forwards allocations to a
thread-local arena by traversing a linked list of arenas, looking for the one
corresponding to the current thread. If the search reaches the end of the linked
list, it allocates a new arena for the current thread and atomically appends it
to the end of the list using a compare-and-swap operation.
The problem was that the previous code used `compare_exchange_weak`, which is
allowed to spuriously fail, i.e. it is allowed to behave as though the original
value is not the same as the expected value, even when it is. In this case, that
means that it might fail to append the new allocation to the list even if the
`next` pointer is still null, which results in a subsequent null pointer
dereference. The fix is to use `compare_exchange_strong`, which is not allowed
to spuriously fail in this way.
Reported in #3806.
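A standalone sketch of that append path (simplified and illustrative; the types and names here are not Binaryen's actual MixedArena code):

```cpp
#include <atomic>
#include <thread>

struct Arena {
  std::thread::id owner = std::this_thread::get_id();
  std::atomic<Arena*> next{nullptr};
};

// Find, or atomically append, the arena belonging to the current thread.
Arena* getThreadArena(Arena* head) {
  Arena* curr = head;
  while (true) {
    if (curr->owner == std::this_thread::get_id()) {
      return curr;
    }
    if (Arena* next = curr->next.load()) {
      curr = next;
      continue;
    }
    // Reached the end of the list: try to append a new arena.
    Arena* fresh = new Arena();
    Arena* expected = nullptr;
    // compare_exchange_strong fails only if another thread really appended
    // first; `expected` then points at that thread's node and we keep
    // walking. A weak exchange may fail spuriously with `expected` still
    // null, and following it would be a null pointer dereference.
    if (curr->next.compare_exchange_strong(expected, fresh)) {
      return fresh;
    }
    delete fresh;
    curr = expected;
  }
}
```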
| |
Inlined parameters become locals, and rtts cannot be handled as locals, unlike
non-nullable values which we can at least fix up. So do not inline functions with
rtt params.
| |
The precompute pass ignored all reference types, but that was overly
pessimistic: we can precompute some of them; for example, a null or a
reference to a function is fully precomputable.
To allow that to work, add missing integration in getFallthrough as well.
With this, we can precompute quite a lot of field accesses in the existing
-Oz testcase, as can be seen from the output. That testcase runs
--fuzz-exec so it prints out all those logged values, proving they have
not changed.
| |
Host limitations are arbitrary and can be modified by optimizations, so
ignore them. For example, if the optimizer removes allocations then a
host limit on an allocation error may vanish. Or, an optimization that
removes recursion and replaces it with a loop may avoid a host limit
on call depth (that is not done currently, but might be some day).
This removes a class of annoying false positives in the fuzzer.
| |
Previously we just used unreachable. This also tries nop when it is possible,
and sometimes that is better (if the code is called, a nop may be a less
intrusive change).
| |
ref.as_non_null is not needed if the value flows into a place that traps
on null anyhow. We replace a trap on one instruction with a trap on
another, but we allow such things (and even changing trap types, which
does not happen here).
| |
Renames the SIMD instructions
* LoadExtSVec8x8ToVecI16x8 -> Load8x8SVec128
* LoadExtUVec8x8ToVecI16x8 -> Load8x8UVec128
* LoadExtSVec16x4ToVecI32x4 -> Load16x4SVec128
* LoadExtUVec16x4ToVecI32x4 -> Load16x4UVec128
* LoadExtSVec32x2ToVecI64x2 -> Load32x2SVec128
* LoadExtUVec32x2ToVecI64x2 -> Load32x2UVec128
| |
Updates binary constants of SIMD instructions to match new opcodes:
* I16x8LoadExtSVec8x8 -> V128Load8x8S
* I16x8LoadExtUVec8x8 -> V128Load8x8U
* I32x4LoadExtSVec16x4 -> V128Load16x4S
* I32x4LoadExtUVec16x4 -> V128Load16x4U
* I64x2LoadExtSVec32x2 -> V128Load32x2S
* I64x2LoadExtUVec32x2 -> V128Load32x2U
* V8x16LoadSplat -> V128Load8Splat
* V16x8LoadSplat -> V128Load16Splat
* V32x4LoadSplat -> V128Load32Splat
* V64x2LoadSplat -> V128Load64Splat
* V8x16Shuffle -> I8x16Shuffle
* V8x16Swizzle -> I8x16Swizzle
* V128AndNot -> V128Andnot
* F32x4DemoteZeroF64x2 -> F32x4DemoteF64x2Zero
* I8x16NarrowSI16x8 -> I8x16NarrowI16x8S
* I8x16NarrowUI16x8 -> I8x16NarrowI16x8U
* I16x8ExtAddPairWiseSI8x16 -> I16x8ExtaddPairwiseI8x16S
* I16x8ExtAddPairWiseUI8x16 -> I16x8ExtaddPairwiseI8x16U
* I32x4ExtAddPairWiseSI16x8 -> I32x4ExtaddPairwiseI16x8S
* I32x4ExtAddPairWiseUI16x8 -> I32x4ExtaddPairwiseI16x8U
* I16x8Q15MulrSatS -> I16x8Q15mulrSatS
* I16x8NarrowSI32x4 -> I16x8NarrowI32x4S
* I16x8NarrowUI32x4 -> I16x8NarrowI32x4U
* I16x8ExtendLowSI8x16 -> I16x8ExtendLowI8x16S
* I16x8ExtendHighSI8x16 -> I16x8ExtendHighI8x16S
* I16x8ExtendLowUI8x16 -> I16x8ExtendLowI8x16U
* I16x8ExtendHighUI8x16 -> I16x8ExtendHighI8x16U
* I16x8ExtMulLowSI8x16 -> I16x8ExtmulLowI8x16S
* I16x8ExtMulHighSI8x16 -> I16x8ExtmulHighI8x16S
* I16x8ExtMulLowUI8x16 -> I16x8ExtmulLowI8x16U
* I16x8ExtMulHighUI8x16 -> I16x8ExtmulHighI8x16U
* I32x4ExtendLowSI16x8 -> I32x4ExtendLowI16x8S
* I32x4ExtendHighSI16x8 -> I32x4ExtendHighI16x8S
* I32x4ExtendLowUI16x8 -> I32x4ExtendLowI16x8U
* I32x4ExtendHighUI16x8 -> I32x4ExtendHighI16x8U
* I32x4DotSVecI16x8 -> I32x4DotI16x8S
* I32x4ExtMulLowSI16x8 -> I32x4ExtmulLowI16x8S
* I32x4ExtMulHighSI16x8 -> I32x4ExtmulHighI16x8S
* I32x4ExtMulLowUI16x8 -> I32x4ExtmulLowI16x8U
* I32x4ExtMulHighUI16x8 -> I32x4ExtmulHighI16x8U
* I64x2ExtendLowSI32x4 -> I64x2ExtendLowI32x4S
* I64x2ExtendHighSI32x4 -> I64x2ExtendHighI32x4S
* I64x2ExtendLowUI32x4 -> I64x2ExtendLowI32x4U
* I64x2ExtendHighUI32x4 -> I64x2ExtendHighI32x4U
* I64x2ExtMulLowSI32x4 -> I64x2ExtmulLowI32x4S
* I64x2ExtMulHighSI32x4 -> I64x2ExtmulHighI32x4S
* I64x2ExtMulLowUI32x4 -> I64x2ExtmulLowI32x4U
* I64x2ExtMulHighUI32x4 -> I64x2ExtmulHighI32x4U
* F32x4PMin -> F32x4Pmin
* F32x4PMax -> F32x4Pmax
* F64x2PMin -> F64x2Pmin
* F64x2PMax -> F64x2Pmax
* I32x4TruncSatSF32x4 -> I32x4TruncSatF32x4S
* I32x4TruncSatUF32x4 -> I32x4TruncSatF32x4U
* F32x4ConvertSI32x4 -> F32x4ConvertI32x4S
* F32x4ConvertUI32x4 -> F32x4ConvertI32x4U
* I32x4TruncSatZeroSF64x2 -> I32x4TruncSatF64x2SZero
* I32x4TruncSatZeroUF64x2 -> I32x4TruncSatF64x2UZero
* F64x2ConvertLowSI32x4 -> F64x2ConvertLowI32x4S
* F64x2ConvertLowUI32x4 -> F64x2ConvertLowI32x4U
| |
Renames the SIMD instructions
* LoadSplatVec8x16 -> Load8SplatVec128
* LoadSplatVec16x8 -> Load16SplatVec128
* LoadSplatVec32x4 -> Load32SplatVec128
* LoadSplatVec64x2 -> Load64SplatVec128
* Load32Zero -> Load32ZeroVec128
* Load64Zero -> Load64ZeroVec128
| |
I'm not sure what `stack$init` is, but I don't think it's been
used for many years.
| |
the builder (#3790)
The builder can receive a HeapType so that callers don't need to set non-nullability
themselves.
Not NFC as some of the callers were in fact still making it nullable.
| |
non-funcref table (#3787)
| |
Adds C/JS APIs for the SIMD instructions
* Load8LaneVec128 (was LoadLaneVec8x16)
* Load16LaneVec128 (was LoadLaneVec16x8)
* Load32LaneVec128 (was LoadLaneVec32x4)
* Load64LaneVec128 (was LoadLaneVec64x2)
* Store8LaneVec128 (was StoreLaneVec8x16)
* Store16LaneVec128 (was StoreLaneVec16x8)
* Store32LaneVec128 (was StoreLaneVec32x4)
* Store64LaneVec128 (was StoreLaneVec64x2)
| |
Adds C/JS APIs for the SIMD instructions
* Load32Zero
* Load64Zero
| |
Adds C/JS APIs for the SIMD instructions
* Q15MulrSatSVecI16x8
* ExtMulLowSVecI16x8
* ExtMulHighSVecI16x8
* ExtMulLowUVecI16x8
* ExtMulHighUVecI16x8
* ExtMulLowSVecI32x4
* ExtMulHighSVecI32x4
* ExtMulLowUVecI32x4
* ExtMulHighUVecI32x4
* ExtMulLowSVecI64x2
* ExtMulHighSVecI64x2
* ExtMulLowUVecI64x2
* ExtMulHighUVecI64x2
| |
Adds C/JS APIs for the SIMD instructions
* ConvertLowSVecI32x4ToVecF64x2
* ConvertLowUVecI32x4ToVecF64x2
* TruncSatZeroSVecF64x2ToVecI32x4
* TruncSatZeroUVecF64x2ToVecI32x4
* DemoteZeroVecF64x2ToVecF32x4
* PromoteLowVecF32x4ToVecF64x2
| |
Adds C/JS APIs for the SIMD instructions
* ExtAddPairwiseSVecI8x16ToI16x8
* ExtAddPairwiseUVecI8x16ToI16x8
* ExtAddPairwiseSVecI16x8ToI32x4
* ExtAddPairwiseUVecI16x8ToI32x4
| |
See https://webassembly.github.io/spec/core/appendix/custom.html
"Each subsection may occur at most once, and in order of increasing id."
| |
Adds C/JS APIs for the SIMD instructions
* ExtendLowSVecI32x4ToVecI64x2
* ExtendHighSVecI32x4ToVecI64x2
* ExtendLowUVecI32x4ToVecI64x2
* ExtendHighUVecI32x4ToVecI64x2
| |
Adds C/JS APIs for the SIMD instructions
* PopcntVecI8x16
* AbsVecI64x2
* AllTrueVecI64x2
* BitmaskVecI64x2
* EqVecI64x2
* NeVecI64x2
* LtSVecI64x2
* GtSVecI64x2
* LeSVecI64x2
* GeSVecI64x2
| |
(#3559)
The spec does not mention traps here, but this is like a JS VM trapping on
OOM - a runtime limitation is reached.
As these are not specced traps, I did not add them to effects.h. Note how,
as a result, the optimizer happily optimizes an unused allocation of an
array of size unsigned(-1) into a nop, which is the behavior we want.
| |
This is just noticeable when debugging locally and doing a quick print to
stdout.
| |
VMs will not convert a 0 or undefined from JS into a wasm null reference - it must be null.
| |
This matches #3747, which makes us not log out reference values; instead
we print just their types.
This also prints a type for non-reference things, replacing a previous
exception, which affects things like SIMD and BigInts, but those trap
anyhow at the JS boundary I believe (or did that change for SIMD?).
Anyhow, printing the type won't give a false "ok" when comparing wasm2js
output to the interpreter, assuming the interpreter prints out a value and
not just a type (which is the case). We could try to do better, but this
code is on the JS side, where we don't have the type - just a string
representation of it, which we'd need to parse etc.
| |
This avoids an annoying case where in each iteration we try to remove
every function one by one and keep failing. Instead, we'll skip large
numbers of them when the factor is large at least.
Also shorten some unnecessary logging.
| |
Also removes experimental SIMD instructions that were not included in the final
spec proposal.
| |
This is needed to make sure globals are printed before element segments,
where `global.get` can appear both as an offset and as an expression.
| |
When canonical heap types were already present in the global store, for example
during the --roundtrip pass, type canonicalization was not working correctly.
The issue was that the GlobalCanonicalizer was replacing temporary HeapTypes
with their canonical equivalents one type at a time, but the act of replacing a
temporary HeapType use with a canonical HeapType use could change the shape of
later HeapTypes, preventing them from being correctly matched with their
canonical counterparts. This PR fixes that problem by computing all the
temporary-to-canonical heap type replacements before executing them.
To avoid a similar problem when canonicalizing Types, one solution would have
been to pre-calculate the replacements before executing them just like with the
HeapTypes, but that would have required either complex bookkeeping or moving
temporary Types into the global store when they are first canonicalized. That
would have been complicated because unlike for temporary HeapTypeInfos, the
unique_ptr to temporary TypeInfos is not readily available. This PR instead
switches back to using pointer-identity based equality and hashing for
TypeInfos, which works because we only ever canonicalize Types with canonical
children. This change should be a nice performance improvement as well.
Another bug this PR fixes is that shape hashing and equality considered
BasicKind HeapTypes to be different from their corresponding BasicHeapTypes,
which meant that canonicalization could produce different types for the same
type definition depending on whether the definition used a TypeBuilder or not.
The fix is to pre-canonicalize BasicHeapTypes (and Types that have them as
children) during shape hashing and equality. The same mechanism is also used to
simplify Store's canonicalization.
Fixes #3736.
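The first fix above, computing every temporary-to-canonical replacement before executing any of them, can be sketched generically (illustrative only; the names here are stand-ins, not Binaryen's types):

```cpp
#include <unordered_map>
#include <vector>

struct Node; // a structure whose shape is used to find its canonical version

// Rewriting uses while still matching would change the shape of later
// structures and prevent them from matching their canonical counterparts,
// so first compute all replacements against the untouched shapes, then
// apply them in a second phase.
void canonicalizeAll(std::vector<Node**>& uses, Node* (*getCanonical)(Node*)) {
  // Phase 1: compute replacements without mutating anything.
  std::unordered_map<Node**, Node*> replacements;
  for (Node** use : uses) {
    replacements[use] = getCanonical(*use);
  }
  // Phase 2: only now rewrite the uses.
  for (auto& [use, canonical] : replacements) {
    *use = canonical;
  }
}
```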
| |
Previously an out-of-bounds index would result in an out-of-bounds read during
finalization of the tuple.extract expression.
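A minimal sketch of the kind of guard this implies (illustrative, not Binaryen's actual finalization code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Type {};

// Finalizing an extract must not index past the tuple's element types.
Type finalizeExtractType(const std::vector<Type>& tupleTypes, size_t index) {
  assert(index < tupleTypes.size() && "tuple.extract index out of bounds");
  return tupleTypes[index];
}
```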
| |
We've been keeping old syntax for tables and element segments in the text
format parser even though it is no longer current and hardly any test case
relies on it. This PR removes support for that old syntax and simplifies
the corresponding parser functions. The few test files affected by this
are updated.
| |
The problem is that a tuple with a non-nullable element cannot be stored
to a local. We'd need to split up the tuple, but that raises questions about
what should be allowed in flat IR (we'd need to allow nested tuple ops
in more places). That combination doesn't seem urgent, so add a clear
error for now, and avoid it in the fuzzer.
Avoids #3759 in the fuzzer
| |
If we are ignoring implicit traps, and if the cast is from a subtype to a supertype,
then we ignore the possible RTT-related inconsistency and can just drop the
cast.
See #3636
| |
This allows changing a global's value on the commandline in an easy way.
In some toolchains this is useful, as the build can contain a global that
indicates something like a logging level, which this option can then customize.
| |
The key fix here is to call walkModuleCode so that we pick up on
types that appear only in the table and nowhere else. The rest of
the patch refactors the code so that that is practical.
| |
This code used to remove functions it no longer thought were needed. That is,
if it added a legalized version of an import, it would remove the illegal
one, which was no longer needed. To avoid removing an illegal import that
is still used, it checked for ref.func appearances.
But this was bad in two ways:
* We need to legalize the ref.funcs too. We can't call an illegal import
in any way, not with a direct call, an indirect call, or a call by reference
of a ref.func.
* It's silly to remove unneeded functions here; we have a pass for that.
This PR stops removing functions and instead adds proper updating of
ref.funcs, which means making them refer to the stub function that looks
like the original import but calls the legalized one, connecting things up
properly, exactly the same way as other calls.
Also remove the code that checked whether we were in the stub/thunk in
order to not replace the call there. That code is not needed: no one will
ever call the illegal import, so we do not need to be careful about
preserving such calls.
| |
The passive keyword has been removed from the spec's text format, and now
any data segment that doesn't have an offset is considered passive.
This PR removes it from both the parser and the Print pass, and updates
all tests that used that syntax.
Fixes #2339
| |
Log out an i64 as two i32s. That keeps the output consistent regardless of
whether we legalize.
Do not print a reference result. The printed value cannot be compared, as
funcref(10) (where 10 is the index of the function) is not guaranteed to be
the same after opts.
Trap when trying to call an export with a nondefaultable type (instead of
asserting when trying to create zeros for it).
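The i64 logging amounts to splitting the value into two 32-bit halves; a standalone sketch (the order and format of the halves here are assumptions, not necessarily what the harness prints):

```cpp
#include <cstdint>
#include <cstdio>

// Print an i64 as two i32s so the output is identical whether or not i64
// values were legalized into pairs of i32s.
void logI64(int64_t value) {
  auto bits = static_cast<uint64_t>(value);
  auto low = static_cast<int32_t>(static_cast<uint32_t>(bits));
  auto high = static_cast<int32_t>(static_cast<uint32_t>(bits >> 32));
  std::printf("i64: %d %d\n", low, high);
}
```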
| |
This is similar to the optimization of BrOn in #3719 and #3724. When the
type tells us the kind of input we have, we can tell at compile time what
result we'll get; for example, ref.is_func of something with type (ref func)
will always return 1.
There are some code size and perf tradeoffs that should be looked into
more and are marked as TODOs.
| |
We missed that in effects.h, with the result that sets could look like
they had no side effects.
Fixes #3754
| |
In this case, there is a natural place to fix things up right after
removing a parameter (which is where a local gets added). Doing it
there avoids doing work on all functions unnecessarily.
Note that we could do something even simpler here than calling
the generic code: the parameter was not used, so the new local
is not used, and we could just change the type of the local or not
add it at all. Those would be slightly more code though, and add
complexity to the parameter removal method itself.
| |
This was noticed by samparker on LLVM:
https://reviews.llvm.org/D99171
This is apparently a pattern LLVM emits, and doing it there helps by 1-2%
on the real-world Bullet Physics codebase. Seems worthwhile doing here
as well.
| |
This is a partial revert of #3669, which removed the old implementation of
Type::getLeastUpperBound that did not correctly handle recursive types. The new
implementation in this PR uses a TypeBuilder to construct LUBs and for recursive
types, it returns a temporary HeapType that has not yet been fully constructed
to break what would otherwise be infinite recursions.
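The recursion-breaking idea is the classic technique for building structures over possibly-cyclic graphs: register a not-yet-finished result before recursing, so a back edge reuses the placeholder instead of recursing forever. A generic sketch of that technique (cloning a cyclic graph; illustrative, not the actual TypeBuilder code):

```cpp
#include <unordered_map>
#include <vector>

struct Node {
  std::vector<Node*> children;
};

// Clone a possibly-cyclic graph. The copy is registered in the memo map
// *before* its children are processed, so a cycle back to `original`
// finds the partially constructed copy instead of recursing infinitely.
Node* clone(Node* original, std::unordered_map<Node*, Node*>& memo) {
  auto it = memo.find(original);
  if (it != memo.end()) {
    return it->second; // may still be under construction; that is the point
  }
  Node* copy = new Node();
  memo[original] = copy;
  for (Node* child : original->children) {
    copy->children.push_back(clone(child, memo));
  }
  return copy;
}
```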
| |
Several old passes like DeadArgumentElimination and DuplicateFunctionElimination
need to look at all ref.funcs, and they scanned functions for that, but that is not
enough as such an instruction might appear in a global initializer. To fix this, add a
walkModuleCode method.
walkModuleCode is useful when doing the pattern of creating a function-parallel
pass to scan functions quickly, but we also want to do the same scanning of code
at the module level. This allows doing so in a single line.
(It is also possible to just do walk() on the entire module, which will find all code,
but that is not function-parallel. Perhaps we should have a walkParallel() option
to simplify this further in a followup, and that would call walkModuleCode afterwards
etc.)
Also add some missing validation and comments in the validator about issues that
I noticed in relation to the new testcases here.
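Schematically, the pattern described above looks something like this (hypothetical, simplified types for illustration; this is not Binaryen's actual walker API):

```cpp
#include <functional>
#include <vector>

// Hypothetical, minimal module representation for illustration.
struct Expression {};
struct Function { Expression* body = nullptr; };
struct Global { Expression* init = nullptr; };
struct Module {
  std::vector<Function> functions;
  std::vector<Global> globals;
  std::vector<Expression*> segmentOffsets;
};

// Scan every expression in the module: function bodies (which a real
// implementation can visit in a function-parallel pass) plus module-level
// code, which a function-only scan would miss.
void scanAllCode(Module& module, const std::function<void(Expression*)>& visit) {
  for (auto& func : module.functions) {
    visit(func.body); // the function-parallel part
  }
  // The module-level part: global initializers and segment offsets.
  for (auto& global : module.globals) {
    visit(global.init);
  }
  for (auto* offset : module.segmentOffsets) {
    visit(offset);
  }
}
```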
| |
If such a parameter is written to then we create a new local for each
such write, and must handle non-nullability of those new locals.
|