path: root/test
Commit message | Author | Age | Files | Lines
...
* Heap2Local: Use escape analysis to turn heap allocations into local data (#3866) | Alon Zakai | 2021-05-12 | 2 | -24/+1595
  If we allocate some GC data, and do not let the reference escape, then we can
  replace the allocation with locals, one local for each field in the
  allocation basically. This avoids the allocation, and also allows us to
  optimize the locals further.

  On the Dart DeltaBlue benchmark, this is a 24% speedup (making it faster than
  the JS version, incidentally), and also a 6% reduction in code size.

  The tests are not the best way to show what this does, as the pass assumes
  other passes will clean up after. Here is an example to clarify. First, in
  pseudocode:

    ref = new Int(42)
    do {
      ref.set(ref.get() + 1)
    } while (import(ref.get()))

  That is, we allocate an int on the heap and use it as a counter.
  Unnecessarily, as it could be a normal int on the stack.

  Wat:

    (module
     ;; A boxed integer: an entire struct just to hold an int.
     (type $boxed-int (struct (field (mut i32))))
     (import "env" "import" (func $import (param i32) (result i32)))
     (func "example"
      (local $ref (ref null $boxed-int))
      ;; Allocate a boxed integer of 42 and save the reference to it.
      (local.set $ref
       (struct.new_with_rtt $boxed-int
        (i32.const 42)
        (rtt.canon $boxed-int)
       )
      )
      ;; Increment the integer in a loop, looking for some condition.
      (loop $loop
       (struct.set $boxed-int 0
        (local.get $ref)
        (i32.add
         (struct.get $boxed-int 0
          (local.get $ref)
         )
         (i32.const 1)
        )
       )
       (br_if $loop
        (call $import
         (struct.get $boxed-int 0
          (local.get $ref)
         )
        )
       )
      )
     )
    )

  Before this pass, the optimizer could do essentially nothing with this. Even
  with this pass, running -O1 has no effect, as the pass is only used in -O2+.
  However, running --heap2local -O1 leads to this:

    (func $0
     (local $0 i32)
     (local.set $0
      (i32.const 42)
     )
     (loop $loop
      (br_if $loop
       (call $import
        (local.tee $0
         (i32.add
          (local.get $0)
          (i32.const 1)
         )
        )
       )
      )
     )
    )

  All the GC heap operations have been removed, and we just have a plain int
  now, allowing a bunch of other opts to run. That output is basically the
  optimal code, I think.
* Printing: Add a comment when we cannot emit something (#3878) | Alon Zakai | 2021-05-11 | 1 | -3/+3
  If we can't emit something, and instead emit a replacement for it (as is the
  case for a StructSet with an unreachable RTT, so we have no known heap type
  for it), add a comment that mentions it is a replacement. This might avoid
  confusion while debugging.
* ExtractFunction: Do not always remove the memory and table (#3877) | Alon Zakai | 2021-05-11 | 2 | -2/+45
  Instead, run RemoveUnusedModuleElements, which does that sort of thing. That
  is, this pass just "extracts" the function by turning all others into
  imports, and then they should almost all be removable via
  RemoveUnusedModuleElements, depending on whether they are used in the table
  or not, whether the extracted function calls them, etc.

  Without this, we would error if a function was in the table, and so this
  fixes #3876.
* [Wasm GC] Fix StructSet::finalize on an unreachable value (#3874) | Alon Zakai | 2021-05-10 | 4 | -17/+69
  Also fix printing of unreachable StructSets, which must handle the case of an
  unreachable reference, which means we do not know the RTT, and so we must
  print a replacement for the StructSet somehow. Emit a block with drops,
  fixing the old behavior which was missing the drops.
* [Wasm GC] Fix precomputing of incompatible fallthrough values (#3875) | Alon Zakai | 2021-05-10 | 1 | -0/+110
  Precompute not only computes values, but looks at the fallthrough,

    (local.set 0
     (block
      ..stuff we can ignore..
      ;; the fallthrough we care about - if a value is set to local 0, it is this
      (i32.const 10)
     )
    )

  Normally that is fine, but the fuzzer found a case where it is not: RefCast
  may return a different type than the fallthrough, even an incompatible type
  if we try to do something bad like cast a function to a struct. As we may
  then propagate the value to a place that expects the proper type, this can
  cause an error.

  To fix this, check if the precomputed value is a proper subtype. If it is
  not, then do not look through into the fallthrough, but compute the entire
  thing. (In the case of a bad RefCast of a func to a struct, it would then
  indicate a trap happens, and we would not precompute the value.)
* Implement all Builder::replaceWithIdenticalType() cases as best we can (#3872) | Alon Zakai | 2021-05-10 | 1 | -0/+21
  The method had TODOs which it halted on. But we should not halt the entire
  program, as this is a best-effort attempt to replace a node with something
  simpler of the same type (we call it when we know the value is not actually
  used).
* [Wasm GC] Fix casting code in interpreter (#3873) | Alon Zakai | 2021-05-10 | 2 | -1/+30
  The logic there would construct the cast value separately for functions and
  data (as we must), and then, in an attempt to share code, would check if the
  cast succeeded or not (and if not, do nothing with the cast value). But this
  was wrong, as in some weird casts (like a struct to a function) we cannot
  construct a valid cast value, and we error there. Instead, check if the cast
  works first, once we know enough to do so, and only then construct the cast
  value if so.
* [Wasm GC] Fix Array initialization of a packed value (#3868) | Alon Zakai | 2021-05-07 | 2 | -1/+31
  We truncated and extended packed values in get and set, but not during
  initialization. Found by the fuzzer.
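  A minimal sketch of the kind of case this covers (illustrative only, not
  taken from the commit; the type name $bytes is hypothetical):

    (type $bytes (array (mut i8)))
    ;; The init value 0x1FF does not fit in the packed i8 field, so it must be
    ;; truncated at initialization; a later array.get_u should observe 0xFF,
    ;; not 0x1FF.
    (array.new_with_rtt $bytes
     (i32.const 0x1FF)  ;; initial value for every element, wider than i8
     (i32.const 3)      ;; length
     (rtt.canon $bytes)
    )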
* Fix interpreting of a ref.cast of a function that is not on the module (#3863) | Alon Zakai | 2021-05-06 | 1 | -0/+41
  Binaryen allows optimizing functions in function-parallel passes while the
  module is still being built, that is, while not all the other functions have
  even been added to the module yet. Since the removal of asm2wasm that has not
  been heavily tested, but the fuzzer found a closely related bug: in passes
  like inlining-optimizing, that inline and then optimize the functions we
  inlined into, the mechanism for optimizing only the relevant functions is to
  create a module with only some of them. (We only want to optimize the
  relevant ones, that we inlined into, because this happens after the main
  optimization pipeline - we don't want to re-optimize all the functions if we
  just inlined into one of them.)

  The specific bug here is that ref.cast of a funcref looked up the target
  function on the module (in order to get its signature, to see if the cast has
  the right RTT for it). The fix is to return a nonconstant flow in that case,
  as it is something we cannot precompute. (This does mean we may miss some
  optimization opportunities, but as in the case of where we optimize functions
  before the module is fully built up, we do still get 99% of function-local
  optimizations that way, and a subsequent round of full optimizations can be
  done later if necessary.)
* Fix typo in function name: BinayenElementSegmentIsPassive (#3862) | Paulo Matos | 2021-05-06 | 1 | -1/+1
  Becomes BinaryenElementSegmentIsPassive.
* Add LocalGraph::equivalent (#3848) | Alon Zakai | 2021-04-29 | 2 | -0/+119
  This compares two local.gets and checks whether we are sure they are
  equivalent, that is, they contain the same value. This does not solve the
  general problem, but uses the existing info to get a positive answer for the
  common case where two gets only receive values by a single set, like

    (local.set $x ..)
    (a use.. (local.get $x))
    (another use.. (local.get $x))

  If they only receive values from the same single set, then we know it must
  dominate them. The only risk is that the set is "in between" the gets, that
  is, that the set occurs after one get and before the other. That can happen
  in a loop in theory,

    (loop $loop
     (use (local.get $x))
     (local.set $x ..some new value each iteration..)
     (use (local.get $x))
     (br_if $loop ..)
    )

  Both of those gets receive a value from the set, and they may be different
  values, from different loop iterations. But as mentioned in the source code,
  this is not a problem since wasm always has a zero-initialization value, and
  so the first local.get in that loop would have another set from which it can
  receive a value, the function entry. (The only way to avoid that is for this
  entire code to be unreachable, in which case nothing matters.)

  This will be useful in dead store elimination, which has to use this to
  reason about references and pointers in order to be able to do anything
  useful with GC and memory.
* UniqueDeferredQueue improvements (#3847) | Alon Zakai | 2021-04-29 | 1 | -0/+38
  Add clear(). Add UniqueNonrepeatingDeferredQueue, which also has the property
  that it never repeats values in the output. Also add unit tests.
* Generic type traversal and fix a LUB bug (#3844) | Thomas Lively | 2021-04-29 | 1 | -0/+31
  Fixes #3843. The issue was that during LUB type building, Hopcroft's
  algorithm was only running on the temporary HeapTypes in the TypeBuilder and
  not considering the globally canonical HeapTypes that were reachable from the
  temporary HeapTypes. That meant that temporary HeapTypes that referred to and
  were equirecursively equivalent to the globally canonical types were not
  properly minimized and could not be matched to the corresponding globally
  canonical HeapTypes. The fix is to run Hopcroft's algorithm on the complete
  HeapType graph, not just the root HeapTypes.

  Since there were already multiple implementations of type graph traversal,
  this PR consolidates them into a generic type traversal utility. Although
  this creates more boilerplate, it also reduces code duplication and will be
  easier to maintain and reuse.

  Now that Hopcroft's algorithm partitions can contain globally canonical
  HeapTypes, this PR also updates the `translateToTypes` step of shape
  canonicalization to reuse the globally canonical types unchanged, since they
  must already be minimal. Without this change, `translateToTypes` could end up
  incorrectly inserting temporary HeapTypes into the globally canonical type
  graph. Unfortunately, this change complicates the interface presented by
  `ShapeCanonicalizer` because it no longer owns the HeapTypeInfos backing all
  of the minimized types. Fixing this is left as future work.
* OptimizeInstructions: Do not change unreachability in if/select changes (#3840) | Alon Zakai | 2021-04-23 | 1 | -0/+50
  For example,

    (if (result i32)
     (local.get $x)
     (return
      (local.get $y)
     )
     (return
      (local.get $z)
     )
    )

  If we move the returns outside we'd become unreachable, but we should not
  make such type changes in this pass (they are handled by DCE and Vacuum).

  (found by the fuzzer)
* OptimizeInstructions: Handle EqZInt64 on an if/select arm, not just 32 (#3837) | Alon Zakai | 2021-04-22 | 1 | -0/+44
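  A sketch of the 64-bit version of the pattern (illustrative only; the shape
  follows the unary-hoisting transform of #3828 further down this log):

    (select
     (i64.eqz (X))
     (i64.eqz (Y))
     (condition)
    )
    =>
    (i64.eqz
     (select
      (X)
      (Y)
      (condition)
     )
    )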
* OptimizeInstructions: Fix/ignore eqz hoisting of if with unreachable arm (#3835) | Alon Zakai | 2021-04-22 | 1 | -0/+40
  We tried to ignore unreachable code, but only checked the type of the entire
  node. But an arm might be unreachable, and after moving code around that
  requires more work to update the type. But such cases are best left to DCE
  anyhow, so just check for any unreachability and stop there.
* [Wasm GC] Skip DeadArgumentElimination of an RTT parameter (#3834) | Alon Zakai | 2021-04-21 | 1 | -0/+20
  We could more carefully see when a local is not needed there, but at the
  moment we always add one, and that doesn't work for something
  nondefaultable.
* Generalize moving of identical code from if/select arms (#3833) | Alon Zakai | 2021-04-21 | 2 | -25/+166
  Effects are fine in the moved code, if we are doing so on an if (which runs
  just one arm anyhow).

  Allow unreachable, which lets us hoist returns for example. Allow none, which
  lets us hoist drop and call for example. For this we also need to be careful
  with subtyping, as at least drop is polymorphic, so the child types may not
  have an LUB (see example in code).

  Adds a small ShallowEffectAnalyzer child of EffectAnalyzer that calls visit
  to just do a shallow analysis (instead of walk which walks the children).
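  A sketch of the "hoist returns" case (illustrative only, not taken from the
  commit):

    (if
     (condition)
     (return (X))
     (return (Y))
    )
    =>
    (return
     (if (result T)  ;; T: the common type of X and Y
      (condition)
      (X)
      (Y)
     )
    )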
* OptimizeInstructions: Move identical unary code out of if/select arms (#3828) | Alon Zakai | 2021-04-21 | 4 | -4/+271
    (select
     (foo (X))
     (foo (Y))
     (condition)
    )
    =>
    (foo
     (select
      (X)
      (Y)
      (condition)
     )
    )

  To make this simpler, refactor optimizeTernary to be templated.
* [Wasm GC] Fix handleNonDefaultableLocals on tees (#3830) | Alon Zakai | 2021-04-21 | 1 | -0/+49
  When we change a local's type, as we do in that method when we turn a
  non-nullable (invalid) local into a nullable (valid) one, we must update tees
  as well as gets - their type must match the changed local type, and we must
  cast them so that their users do not see a change.
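  A sketch of the fixup described here (illustrative only; the local and type
  names are hypothetical):

    ;; $x was declared (ref $T), which is non-nullable and so not valid as a
    ;; local type here; it is changed to (ref null $T). The tee's type changes
    ;; with the local, so a cast restores the type its users expect:
    (ref.as_non_null
     (local.tee $x
      (value)
     )
    )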
* Update test expectations after colliding landings (#3827) | Alon Zakai | 2021-04-20 | 1 | -1/+1
* Fix element segment ordering in Print (#3818) | Abbas Mashayekh | 2021-04-20 | 24 | -50/+56
  We used to print active element segments right after corresponding tables,
  and passive segments came after those. We didn't print internal segment
  names, and empty segments weren't being printed at all. This meant that there
  was no way for instructions to refer to those table segments after round
  tripping. This will fix those issues by printing segments in the order they
  were defined, including segment names when necessary and not omitting empty
  segments anymore.
* Run spec test all at once after binary transform (#3817) | Abbas Mashayekh | 2021-04-20 | 1 | -21/+130
  #3792 added support for module linking and the (register ...) command in
  wasm-shell, but forgot about three problems:

  - Splitting spec tests prevents linking test modules together.
  - Registered modules may still be used in assertions or an invoke.
  - Modules may re-export imported objects.

  This PR appends transformed modules after binary checks to a spec.wast file,
  plus assertion tests and register commands, then runs wasm-shell on the whole
  file. It also keeps both the module name and its registered name available in
  wasm-shell for use in shell commands and linked modules. Furthermore, it
  correctly finds the module where an object is defined even if it is imported
  and re-exported several times. The updated version of the imports.wast spec
  test is enabled to verify the fixes.
* Minor wasm-split improvements (#3825) | Thomas Lively | 2021-04-20 | 2 | -0/+171
  - Support functions appearing more than once in the table. It turns out we
    were assuming and asserting that functions would appear at most once, but
    we weren't making use of that assumption in any way.
  - Make TableSlotManager::getSlot take a function Name rather than a RefFunc
    expression to avoid allocating and leaking unnecessary expressions.
  - Add and use a Builder interface for building TableElementSegments to make
    them more similar to other module-level items.
* Implement missing if restructuring (#3819) | Alon Zakai | 2021-04-20 | 3 | -11/+266
  The existing restructuring code could turn a block+br_if into an if in simple
  cases, but it had some TODOs that I noticed were helpful on GC benchmarks.

  One limitation was that we need to reorder the condition and the value,

    (block
     (br_if
      (value)
      (condition)
     )
     (...)
    )
    =>
    (if
     (condition)
     (value)
     (...)
    )

  The old code checked for side effects in the condition. But it is ok for it
  to have side effects if they can be reordered with the value (for example, if
  the value is a constant then it definitely does not care about side effects
  in the condition).

  The other missing TODO is to use a select when we can't use an if:

    (block
     (drop
      (br_if
       (value)
       (condition)
      )
     )
     (...)
    )
    =>
    (select
     (value)
     (...)
     (condition)
    )

  In this case we do not reorder the condition and the value, but we do reorder
  the condition with the rest of the block.
* Optimize if/select with one arm an EqZ and another a 0 or a 1 (#3822) | Alon Zakai | 2021-04-20 | 3 | -60/+177
    (select
     (i32.eqz (X))
     (i32.const 0|1)
     (Y)
    )
    =>
    (i32.eqz
     (select
      (X)
      (i32.const 1|0)
      (Y)
     )
    )

  This is beneficial as the eqz may be folded into something on the outside. I
  see this pattern in real-world code, both in a GC benchmark (which is why I
  noticed it) and in the emscripten benchmark suite, where it shrinks code size
  by tiny amounts as well.
* [Wasm GC] Reorder ref.as_non_null with tee and cast (#3820) | Alon Zakai | 2021-04-19 | 1 | -0/+113
  In both cases doing the ref.as_non_null last is beneficial as we have
  optimizations that can remove it based on where it is consumed.
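  A sketch of the tee case (illustrative only, not taken from the commit):

    (local.tee $x
     (ref.as_non_null
      (value)
     )
    )
    =>
    (ref.as_non_null
     (local.tee $x
      (value)
     )
    )
    ;; with the cast outermost, a later pass can remove it entirely if the
    ;; result flows into something that traps on null anyhow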
* [Wasm GC] Optimize reference identity checks (#3814) | Alon Zakai | 2021-04-19 | 2 | -1/+202
  * Note that ref.cast has a fallthrough value.
  * Optimize ref.eq on identical inputs.
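  A sketch of the identical-inputs case (illustrative only; this assumes the
  repeated get has no side effects):

    (ref.eq
     (local.get $x)
     (local.get $x)
    )
    =>
    (i32.const 1)
    ;; both operands read the same reference, so they compare equal
    ;; (two nulls also compare equal under ref.eq)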
* LegalizeJSInterface: Remove illegal imports once they are no longer used (#3815) | Sam Clegg | 2021-04-16 | 3 | -8/+3
  This prevents used imports which also happen to have duplicate names and
  therefore cannot be provided by wasm (JS is happy to fill these in with
  polymorphic JS functions). I noticed this when working on emscripten and
  directly hooking modules together. I was seeing failures, but not in release
  builds (because wasm-opt would mop these up in release builds).
* [Wasm GC] Fix precompute on GC data (#3810) | Alon Zakai | 2021-04-15 | 3 | -17/+487
  This fixes precomputation on GC after #3803 was too optimistic.

  The issue is subtle. Precompute will repeatedly evaluate expressions and
  propagate their values, flowing them around, and it ignores side effects when
  doing so. For example:

    (block
     ..side effect..
     (i32.const 1)
    )

  When we evaluate that we see there are side effects, but regardless of them
  we know the value flowing out is 1. So we can propagate that value, if it is
  assigned to a local and read elsewhere.

  This is not valid for GC because struct.new and array.new have a "side
  effect" that is noticeable in the result. Each time we call struct.new we get
  a new struct with a new address, which ref.eq can distinguish. So when this
  pass evaluates the same thing multiple times it will get a different result.
  Also, we can't precompute a struct.get even if we know the struct, not unless
  we know the reference has not escaped (where a call could modify it).

  To avoid all that, do not precompute references, aside from the trivially
  safe ones like nulls and function references (simple constants that are the
  same each time we evaluate the expression emitting them).

  precomputeExpression() had a minor bug which this fixes. It checked the type
  of the expression to see if we can create a constant for it, but really it
  should check the value - since (separate from this PR) we have no way to emit
  a "constant" for a struct etc. Also that only matters if replaceExpression is
  true, that is, if we are replacing with a constant; if we just want the value
  internally, we have no limit on that.

  Also add Literal support for comparing GC refs, which is used by ref.eq.
  Without that tiny fix the tests here crash.

  This adds a bunch of tests, many for corner cases that we don't handle (since
  the PR makes us not propagate GC references). But they should be helpful
  if/when we do, to avoid the mistakes in #3803.
* Rename emscripten metadata key to reflect new unmangled names (#3813) | Sam Clegg | 2021-04-15 | 33 | -33/+33
  Turns out just removing the mangling wasn't enough for emscripten to support
  both before and after versions.

  See https://github.com/WebAssembly/binaryen/pull/3785
* Remove renaming of __wasm_call_ctors (#3811) | Sam Clegg | 2021-04-15 | 4 | -8/+8
  See https://github.com/emscripten-core/emscripten/issues/13893
* Remove final remnants of name mangling from wasm-emscripten (#3785) | Sam Clegg | 2021-04-15 | 10 | -41/+41
  See https://github.com/emscripten-core/emscripten/pull/13847
* [Wasm GC] Do not inline a function with an RTT parameter (#3808) | Alon Zakai | 2021-04-14 | 2 | -0/+24
  Inlined parameters become locals, and rtts cannot be handled as locals,
  unlike non-nullable values which we can at least fix up. So do not inline
  functions with rtt params.
* [Wasm GC] Full precompute support for GC (#3803) | Alon Zakai | 2021-04-13 | 2 | -85/+50
  The precompute pass ignored all reference types, but that was overly
  pessimistic: we can precompute some of them, namely a null and a reference to
  a function are fully precomputable, etc.

  To allow that to work, add missing integration in getFallthrough as well.

  With this, we can precompute quite a lot of field accesses in the existing
  -Oz testcase, as can be seen from the output. That testcase runs --fuzz-exec
  so it prints out all those logged values, proving they have not changed.
* Fuzzer: Distinguish traps from host limitations (#3801) | Alon Zakai | 2021-04-12 | 2 | -1/+29
  Host limitations are arbitrary and can be modified by optimizations, so
  ignore them. For example, if the optimizer removes allocations then a host
  limit on an allocation error may vanish. Or, an optimization that removes
  recursion and replaces it with a loop may avoid a host limit on call depth
  (that is not done currently, but might some day).

  This removes a class of annoying false positives in the fuzzer.
* [Wasm GC] Optimize away unnecessary non-null assertions (#3800) | Alon Zakai | 2021-04-12 | 1 | -9/+82
  ref.as_non_null is not needed if the value flows into a place that traps on
  null anyhow. We replace a trap on one instruction with a trap on another, but
  we allow such things (and even changing trap types, which does not happen
  here).
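  A sketch of the pattern (illustrative only; the type name $T is
  hypothetical):

    (struct.get $T 0
     (ref.as_non_null
      (local.get $ref)
     )
    )
    =>
    (struct.get $T 0
     (local.get $ref)
    )
    ;; struct.get itself traps on a null reference, so the explicit assertion
    ;; adds nothing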
* Rename SIMD extending load instructions (#3798) | Daniel Wirtz | 2021-04-12 | 2 | -33/+15
  Renames the SIMD instructions
  * LoadExtSVec8x8ToVecI16x8 -> Load8x8SVec128
  * LoadExtUVec8x8ToVecI16x8 -> Load8x8UVec128
  * LoadExtSVec16x4ToVecI32x4 -> Load16x4SVec128
  * LoadExtUVec16x4ToVecI32x4 -> Load16x4UVec128
  * LoadExtSVec32x2ToVecI64x2 -> Load32x2SVec128
  * LoadExtUVec32x2ToVecI64x2 -> Load32x2UVec128
* Rename various SIMD load instructions (#3795) | Daniel Wirtz | 2021-04-11 | 2 | -7/+7
  Renames the SIMD instructions
  * LoadSplatVec8x16 -> Load8SplatVec128
  * LoadSplatVec16x8 -> Load16SplatVec128
  * LoadSplatVec32x4 -> Load32SplatVec128
  * LoadSplatVec64x2 -> Load64SplatVec128
  * Load32Zero -> Load32ZeroVec128
  * Load64Zero -> Load64ZeroVec128
* RefFunc: Validate that the type is non-nullable, and avoid possible bugs in the builder (#3790) | Alon Zakai | 2021-04-08 | 1 | -1/+3
  The builder can receive a HeapType so that callers don't need to set
  non-nullability themselves. Not NFC as some of the callers were in fact still
  making it nullable.
* Add v128.load/storeN_lane SIMD instructions to C/JS API (#3784) | Daniel Wirtz | 2021-04-08 | 4 | -0/+275
  Adds C/JS APIs for the SIMD instructions
  * Load8LaneVec128 (was LoadLaneVec8x16)
  * Load16LaneVec128 (was LoadLaneVec16x8)
  * Load32LaneVec128 (was LoadLaneVec32x4)
  * Load64LaneVec128 (was LoadLaneVec64x2)
  * Store8LaneVec128 (was StoreLaneVec8x16)
  * Store16LaneVec128 (was StoreLaneVec16x8)
  * Store32LaneVec128 (was StoreLaneVec32x4)
  * Store64LaneVec128 (was StoreLaneVec64x2)
* Add v128.load32/64_zero SIMD instructions to C/JS API (#3783) | Daniel Wirtz | 2021-04-08 | 4 | -0/+42
  Adds C/JS APIs for the SIMD instructions
  * Load32Zero
  * Load64Zero
* Add new SIMD multiplication instructions to C/JS API (#3782) | Daniel Wirtz | 2021-04-08 | 4 | -0/+260
  Adds C/JS APIs for the SIMD instructions
  * Q15MulrSatSVecI16x8
  * ExtMulLowSVecI16x8
  * ExtMulHighSVecI16x8
  * ExtMulLowUVecI16x8
  * ExtMulHighUVecI16x8
  * ExtMulLowSVecI32x4
  * ExtMulHighSVecI32x4
  * ExtMulLowUVecI32x4
  * ExtMulHighUVecI32x4
  * ExtMulLowSVecI64x2
  * ExtMulHighSVecI64x2
  * ExtMulLowUVecI64x2
  * ExtMulHighUVecI64x2
* Add new SIMD conversion instructions to C/JS API (#3781) | Daniel Wirtz | 2021-04-08 | 4 | -0/+102
  Adds C/JS APIs for the SIMD instructions
  * ConvertLowSVecI32x4ToVecF64x2
  * ConvertLowUVecI32x4ToVecF64x2
  * TruncSatZeroSVecF64x2ToVecI32x4
  * TruncSatZeroUVecF64x2ToVecI32x4
  * DemoteZeroVecF64x2ToVecF32x4
  * PromoteLowVecF32x4ToVecF64x2
* Add iNxM.extadd_pairwise_* SIMD instructions to C/JS API (#3780) | Daniel Wirtz | 2021-04-08 | 4 | -0/+68
  Adds C/JS APIs for the SIMD instructions
  * ExtAddPairwiseSVecI8x16ToI16x8
  * ExtAddPairwiseUVecI8x16ToI16x8
  * ExtAddPairwiseSVecI16x8ToI32x4
  * ExtAddPairwiseUVecI16x8ToI32x4
* Add i64x2.extend_low/high_* SIMD instructions to C/JS API (#3778) | Daniel Wirtz | 2021-04-07 | 4 | -0/+68
  Adds C/JS APIs for the SIMD instructions
  * ExtendLowSVecI32x4ToVecI64x2
  * ExtendHighSVecI32x4ToVecI64x2
  * ExtendLowUVecI32x4ToVecI64x2
  * ExtendHighUVecI32x4ToVecI64x2
* Add various SIMD instructions to C/JS API (#3777) | Daniel Wirtz | 2021-04-07 | 4 | -0/+188
  Adds C/JS APIs for the SIMD instructions
  * PopcntVecI8x16
  * AbsVecI64x2
  * AllTrueVecI64x2
  * BitmaskVecI64x2
  * EqVecI64x2
  * NeVecI64x2
  * LtSVecI64x2
  * GtSVecI64x2
  * LeSVecI64x2
  * GeSVecI64x2
* [GC] Do not crash on unreasonable GC array allocations in interpreter; trap (#3559) | Alon Zakai | 2021-04-07 | 2 | -4/+19
  The spec does not mention traps here, but this is like a JS VM trapping on
  OOM - a runtime limitation is reached. As these are not specced traps, I did
  not add them to effects.h. Note how as a result the optimizer happily
  optimizes into a nop an unused allocation of an array of size unsigned(-1),
  which is the behavior we want.
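  A sketch of the optimizer behavior mentioned at the end (illustrative only;
  the type name $vec is hypothetical):

    (drop
     (array.new_default_with_rtt $vec
      (i32.const -1)     ;; read as unsigned, an enormous size
      (rtt.canon $vec)
     )
    )
    ;; the allocation is unused and its failure is only a host limitation,
    ;; so the whole expression can be optimized into a nop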
* [RT] Add type to tables and element segments (#3763) | Abbas Mashayekh | 2021-04-06 | 6 | -8/+318
* Emit dollar signs when relevant while debugging s-expression elements (#3693) | Alon Zakai | 2021-04-06 | 2 | -6/+6
  This is just noticeable when debugging locally and doing a quick print to
  stdout.