path: root/src
Commit message | Author | Age | Files | Lines
* Update nominal binary format to match milestone 4 (#4211) | Thomas Lively | 2021-10-04 | 1 | -14/+37
  Update the binary format used in --nominal mode to match the format of nominal types in milestone 4. In particular, types without declared supertypes are now emitted using the nominal type codes with either `func` or `data` as their supertypes. This change is hopefully enough to get --nominal mode code running on V8's milestone 4 implementation until the rest of the type system changes can be implemented for use without --nominal.
* Fix roundtripping specialized element segments of table zero (#4212) | Alon Zakai | 2021-10-05 | 1 | -1/+6
  Before this fix, the first table (index 0) was treated as if its element segment had "no table index" even when its type was not funcref, which could break things if that table had a more specialized type.
* Optimize call_indirect of a select of two constants (#4208) | Alon Zakai | 2021-10-04 | 1 | -29/+100
  (call_indirect ..args..
    (select
      (i32.const x)
      (i32.const y)
      (condition)
    )
  )
  =>
  (if (condition)
    (call $func-for-x ..args..)
    (call $func-for-y ..args..)
  )
  To do this we must reorder the condition with the args, and also use the args more than once, so place them all in locals. This works towards the goal of polymorphic devirtualization, that is, turning an indirect call of more than one possible target into more than one direct call.
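  The same rewrite can be sketched in C++ with function pointers (funcForX/funcForY below are hypothetical stand-ins for $func-for-x/$func-for-y): selecting a pointer and calling it indirectly gives the same result as branching first and calling directly, and the direct form is what later enables inlining and other devirtualization.

```cpp
#include <cassert>

// Hypothetical call targets standing in for $func-for-x / $func-for-y.
int funcForX(int a) { return a + 1; }
int funcForY(int a) { return a * 2; }

// Before the rewrite: select a function pointer, then call indirectly.
int callIndirect(bool condition, int arg) {
  int (*f)(int) = condition ? funcForX : funcForY;
  return f(arg); // indirect call through the selected pointer
}

// After the rewrite: branch on the condition, then call directly.
// Direct calls are visible to the optimizer (inlinable, etc.).
int callDirect(bool condition, int arg) {
  if (condition) {
    return funcForX(arg);
  }
  return funcForY(arg);
}
```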
* Fix RefNull in wasm-delegations-fields.def (#4210) | Alon Zakai | 2021-10-04 | 1 | -1/+0
  The type field is present in all Expressions, but RefNull's delegations marked it as if it were a new field, which meant we processed it twice. This was mostly just extra work.
* Refactor generic functionality out of ConstantFieldPropagation. NFC (#4209) | Alon Zakai | 2021-10-04 | 3 | -249/+409
  This just moves code out and makes it more generic. One piece of functionality is "struct utils", tools to scan wasm for info about the usage of struct fields and to analyze that data. The other is a general analysis of nominal subtypes. The code will be useful in a few upcoming passes, so this avoids a significant amount of code duplication.
* Optimize call_ref+table.get => call_indirect (#4207) | Alon Zakai | 2021-10-04 | 1 | -0/+13
  Rather than load from the table and call that reference, call using the table.
* Fix inlining name collision (#4206) | Alon Zakai | 2021-10-04 | 2 | -0/+39
* Fix Emscripten build by changing `|` to `||` (#4205) | Thomas Lively | 2021-10-04 | 1 | -1/+1
  Emscripten must have rolled in a new warning about using `|` on booleans.
* Use ptrdiff_t instead of long for type iterator diff (#4200) | Derek Schuff | 2021-10-01 | 1 | -1/+1
* Remove use of std::iterator (#4199) | Derek Schuff | 2021-10-01 | 2 | -6/+12
  It's deprecated in C++17.
* Implement table.get (#4195) | Alon Zakai | 2021-09-30 | 21 | -25/+189
  Adds the part of the spec test suite that this passes (without table.set we can't do it all).
* [Wasm GC] Optimize static (rtt-free) operations (#4186) | Alon Zakai | 2021-09-30 | 1 | -91/+134
  Now that they are all implemented, we can optimize them. This removes the big if that ignored static operations, and implements optimizations for them. In general this matches the existing rtt-using case, but there are a few things we can do better, which this does:
  * A cast of a subtype to a type always succeeds.
  * A test of a subtype to a type is always 1 (if non-nullable).
  * Repeated static casts can leave just the most demanding of them.
* Add a SmallSet and use it in LocalGraph. NFC (#4188) | Alon Zakai | 2021-09-29 | 3 | -8/+288
  A SmallSet starts with fixed storage that it uses in the simplest possible way (linear scan, no sorting). If it exceeds a fixed size, it switches to a normal std::set, so for small amounts of data it avoids allocation and other overhead. This adds a unit test and also uses the SmallSet in LocalGraph, which provides a large amount of additional coverage. I also changed an unrelated data structure from std::map to std::unordered_map, which I noticed while profiling LocalGraph (plus a tiny bit of additional refactoring there). This makes LocalGraph-using passes like ssa-nomerge and precompute-propagate 10-15% faster on a bunch of real-world codebases I tested.
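  The small-size pattern described above can be sketched as follows. This is a hypothetical minimal version for illustration, not Binaryen's actual SmallSet class: it keeps a fixed array scanned linearly, and spills into a std::set only once the fixed capacity is exceeded.

```cpp
#include <array>
#include <cstddef>
#include <set>

// Minimal sketch of a "small set": linear scan over fixed storage,
// falling back to a std::set once the fixed capacity N is exceeded.
template <typename T, size_t N> class SmallSet {
  std::array<T, N> fixed{}; // used while we have at most N elements
  size_t fixedCount = 0;
  std::set<T> flexible;     // used after exceeding N elements
  bool usingFixed() const { return flexible.empty(); }

public:
  void insert(const T& x) {
    if (usingFixed()) {
      // Linear scan: cheap for small N, and avoids any allocation.
      for (size_t i = 0; i < fixedCount; i++) {
        if (fixed[i] == x) return;
      }
      if (fixedCount < N) {
        fixed[fixedCount++] = x;
        return;
      }
      // Overflow: move the fixed elements into the std::set.
      flexible.insert(fixed.begin(), fixed.begin() + fixedCount);
      fixedCount = 0;
    }
    flexible.insert(x);
  }
  size_t size() const {
    return usingFixed() ? fixedCount : flexible.size();
  }
  bool count(const T& x) const {
    if (usingFixed()) {
      for (size_t i = 0; i < fixedCount; i++) {
        if (fixed[i] == x) return true;
      }
      return false;
    }
    return flexible.count(x) > 0;
  }
};
```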
* Disable partial inlining by default and add a flag for it. (#4191) | Alon Zakai | 2021-09-27 | 3 | -2/+16
  Locally I saw a 10% speedup on j2cl, but reports of regressions have arrived, so let's disable it for now pending investigation. The option added here should make it easy to experiment.
* Inlining: Remove unneeded functions in linear time (#4190) | Alon Zakai | 2021-09-27 | 1 | -2/+7
  By mistake, the recent partial inlining work introduced quadratic time into the compiler: erasing a function from the list of functions takes linear time, which is why we have removeFunctions, which removes a group at a time. This isn't noticeable on small programs, but on j2cl output this makes the inlining-optimizing step 2x faster. See #4165.
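  The batched removal idea is the classic erase-remove pattern: erasing k elements one at a time from a vector costs O(k*n), while a single pass removes the whole group in O(n). A sketch under assumed types (this is not Binaryen's actual removeFunctions signature):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Remove a whole group of names in one O(n) pass instead of erasing
// them one at a time (each erase alone is O(n), so O(k*n) total).
void removeFunctions(std::vector<std::string>& funcs,
                     const std::vector<std::string>& toRemove) {
  auto isListed = [&](const std::string& name) {
    return std::find(toRemove.begin(), toRemove.end(), name) !=
           toRemove.end();
  };
  // Erase-remove idiom: compact the survivors, then trim the tail.
  funcs.erase(std::remove_if(funcs.begin(), funcs.end(), isListed),
              funcs.end());
}
```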
* [wasm-split] Disallow mixing --profile, --keep-funcs, and --split-funcs (#4187) | Thomas Lively | 2021-09-24 | 2 | -41/+40
  Previously the set of functions to keep was initially empty, then the profile added new functions to keep, then the --keep-funcs functions were added, then the --split-funcs functions were removed. This method of composing these different options was arbitrary and not necessarily intuitive, and it prevented reasonable workflows from working. For example, providing only a --split-funcs list would result in all functions being split out, no matter which functions were listed. To make the behavior of these options, and --split-funcs in particular, more intuitive, disallow mixing them; when --split-funcs is used, split out only the listed functions.
* Precompute: Only run a single LocalGraph iteration (#4184) | Alon Zakai | 2021-09-23 | 1 | -19/+24
  Precompute has a mode in which it propagates results from local.sets to local.gets. That constructs a LocalGraph, which is a non-trivial amount of work. We used to run multiple iterations of this, but investigation shows that such opportunities are extremely rare: doing just a single propagation iteration has no effect on the entire emscripten benchmark suite, nor on j2cl output. Furthermore, we run this pass twice in the normal pipeline (once early, once late), so even if there are such opportunities they may be optimized already. And --converge is a way to get additional iterations of all passes if a user wants that, so it makes sense not to do costly work for more iterations automatically. In effect, before this change we would create the LocalGraph twice 99.99% of the time: once the first time, then a second time only to see that we can't actually optimize anything further. This PR makes us only create it once, which makes precompute-propagate 10% faster on j2cl and even faster on other things like poppler (33%) and LLVM (29%). See the change in the test suite for an example of a case that does require more than one iteration to be optimized. Note that even there, we only manage to get benefit from a second iteration by doing something that overlaps with another pass (optimizing out an if with condition 0), which shows even more how unnecessary the extra work was. See #4165.
* RemoveUnusedBrs: Optimize if-of-if pattern (#4180) | Alon Zakai | 2021-09-23 | 1 | -2/+41
  if (A) {
    if (B) {
      C
    }
  }
  =>
  if (A ? B : 0) {
    C
  }
  when B has no side effects, and is fast enough to consider running unconditionally. In that case, we replace an if with a select and a zero, which is the same size, but should be faster and may be further optimized. As suggested in #4168.
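  The rewrite rests on the identity that, when B is side-effect-free, the nested ifs run C exactly when the select `A ? B : 0` is true. A quick check of that identity with hypothetical helper functions (`return true` marks where C would execute):

```cpp
// Nested form: "if (A) { if (B) { C } }".
bool runsNested(bool a, bool b) {
  if (a) {
    if (b) {
      return true; // C executes here
    }
  }
  return false;
}

// Rewritten form: "if (A ? B : 0) { C }". Valid only because B has
// no side effects, so evaluating it unconditionally is safe.
bool runsSelect(bool a, bool b) {
  if (a ? b : false) {
    return true; // C executes here
  }
  return false;
}
```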
* Add feature flag for relaxed-simd (#4183) | Ng Zhi An | 2021-09-23 | 8 | -1/+18
* [Wasm GC] Implement static (rtt-free) StructNew, ArrayNew, ArrayInit (#4172) | Alon Zakai | 2021-09-23 | 22 | -125/+451
  See #4149. This modifies the test added in #4163, which used static casts on dynamically-created structs and arrays. That was technically not valid (as we don't want users to "mix" the two forms). This makes that test 100% static, which both fixes the test and gives test coverage to the new instructions added here.
* [Refactoring] Code reusage in spliceIntoBlock (#4174) | Max Graey | 2021-09-22 | 2 | -11/+3
* Use UniqueDeferringQueue in Precompute (#4179) | Alon Zakai | 2021-09-22 | 2 | -7/+7
* Properly error on bad function names in text format (#4177) | Alon Zakai | 2021-09-22 | 2 | -2/+12
* Remove unneeded work in Precompute (#4176) | Alon Zakai | 2021-09-22 | 1 | -1/+0
  isSSA is not called anywhere. See #4165.
* [Wasm GC] Fix invalid intermediate IR in OptimizeInstructions (#4169) | Alon Zakai | 2021-09-20 | 1 | -12/+9
  We added an optional ReFinalize in OptimizeInstructions at some point, but that is not valid: the ReFinalize only updates types when all other work is done, but the pass works incrementally. The bug the fuzzer found is that a child is changed to be unreachable, and then the parent is optimized before finalize() is called on it, which led to an assertion being hit (as the child was unreachable but not the parent, which should also have been). To fix this, do not change types in this pass. Emit an extra block with a declared type when necessary. Other passes can remove the extra block.
* Tiny code cleanups. NFC (#4171) | Alon Zakai | 2021-09-20 | 2 | -2/+1
  Remove an unnecessary include, and fix a typo in a macro declaration (that macro is not tested, as it seems nothing uses DELEGATE_END yet, but I may use it soon).
* [Matcher] Add bval for matching boolean literals (#4162) | Max Graey | 2021-09-20 | 2 | -12/+38
* Fix an unused variable warning. NFC (#4170) | walkingeyerobot | 2021-09-20 | 1 | -0/+1
* [Wasm GC] Add static variants of ref.test, ref.cast, and br_on_cast* (#4163) | Alon Zakai | 2021-09-20 | 16 | -169/+445
  These variants take a HeapType that is the type we intend to cast to, and do not take an RTT. They are intended to be more statically optimizable. For now, though, this PR just implements the minimum to get them parsing and to get through the optimizer without crashing.
  Spec: https://docs.google.com/document/d/1afthjsL_B9UaMqCA5ekgVmOm75BVFu6duHNsN9-gnXw/edit#
  See #4149.
* Fix interpreting of ref.as_func|data (#4164) | Alon Zakai | 2021-09-20 | 1 | -2/+2
* Partial inlining via function splitting (#4152) | Alon Zakai | 2021-09-17 | 2 | -45/+595
  This PR helps with functions like this:

    function foo(x) {
      if (x) {
        .. lots of work here ..
      }
    }

  If "lots of work" is large enough, then we won't inline such a function. However, we may end up calling into the function only to get a false on that if and immediately exit. So it is useful to partially inline this function, basically by splitting it into a condition part that is inlineable

    function foo$inlineable(x) {
      if (x) {
        foo$outlined();
      }
    }

  and an outlined part that is not inlineable:

    function foo$outlined(x) {
      .. lots of work here ..
    }

  We can then inline the inlineable part. That means that a call like

    foo(param);

  turns into

    if (param) {
      foo$outlined();
    }

  In other words, we end up replacing a call and then a check with a check and then a call. Any time that the condition is false, this will be a speedup. The cost here is increased size, as we duplicate the condition into the callsites. For that reason, only do this when heavily optimizing for size.
  This is a 10% speedup on j2cl. This helps two types of functions there: Java class inits, which often look like "have I been initialized before? if not, do all this work", and also assertion methods, which look like "if the input is null, throw an exception".
* [Wasm GC] Optimize away ref.as_non_null going into local.set in TNH mode (#4157) | Alon Zakai | 2021-09-16 | 1 | -10/+25
  If we can remove such traps, we can remove ref.as_non_null if the local type is nullable anyhow. If we support non-nullable locals, however, then do not do this, as it could inhibit specializing the local type later. Do the same for tees, which we had existing code for.
  Background: #4061 (comment)
* Fix regression from #4130 (#4158) | Alon Zakai | 2021-09-16 | 1 | -2/+8
  That PR reused the same node twice in the output, which fails on the assertion in BINARYEN_PASS_DEBUG=1 mode. No new test is needed because the existing test suite fails already in that mode. That the PR managed to land seems to say that we are not testing pass-debug mode on our lit tests, which we need to investigate.
* [Wasm GC] Fix OptimizeInstructions on unreachable ref.test (#4156) | Alon Zakai | 2021-09-15 | 1 | -0/+4
  Avoids a crash in calling getHeapType when there isn't one. Also add the relevant lit test (and a few others) to the list of files to fuzz more heavily.
* [Wasm GC] Fix lack of packing in array.init (#4153) | Alon Zakai | 2021-09-14 | 1 | -1/+2
* [OptimizeInstructions] Optimize memory.fill with constant arguments (#4130) | Max Graey | 2021-09-14 | 1 | -1/+126
  This is a reland of #3071.
  Do similar optimizations as in #3038, but for memory.fill:

    memory.fill(d, v, 0) ==> { drop(d), drop(v) }   (only with ignoreImplicitTraps or trapsNeverHappen)
    memory.fill(d, v, 1) ==> store8(d, v)

  Further simplifications can be done only if v is constant, because otherwise binary size would increase:

    memory.fill(d, C, 1)  ==> store8(d, (C & 0xFF))
    memory.fill(d, C, 2)  ==> store16(d, (C & 0xFF) * 0x0101)
    memory.fill(d, C, 4)  ==> store32(d, (C & 0xFF) * 0x01010101)
    memory.fill(d, C, 8)  ==> store64(d, (C & 0xFF) * 0x0101010101010101)
    memory.fill(d, C, 16) ==> store128(d, i8x16.splat(C & 0xFF))
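  The multiplications above work because multiplying a single byte by a constant like 0x01010101 replicates that byte into every byte of the word, which is exactly what filling memory with that byte value produces. A quick sanity check of the splat trick (helper names are ours, for illustration):

```cpp
#include <cstdint>

// Replicate the low byte of c into all four bytes of a 32-bit word,
// matching the store32(d, (C & 0xFF) * 0x01010101) rewrite.
uint32_t splat4(uint32_t c) {
  return (c & 0xFF) * 0x01010101u;
}

// Same idea for a 64-bit store: eight copies of the low byte.
uint64_t splat8(uint64_t c) {
  return (c & 0xFF) * 0x0101010101010101ull;
}
```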
* RemoveUnusedBrs::tablify() improvements: handle EqZ and tee (#4144) | Alon Zakai | 2021-09-13 | 1 | -9/+40
  tablify() attempts to turn a sequence of br_ifs into a single br_table. This PR adds some flexibility to the specific pattern it looks for, specifically:
  * Accept i32.eqz as a comparison to zero, not just i32.eq against a constant.
  * Allow the first condition to be a tee. If it is, compare later conditions to local.get of that local.
  This will allow more br_tables to be emitted in j2cl output.
* OptimizeInstructions: Optimize boolean selects (#4147) | Alon Zakai | 2021-09-13 | 2 | -0/+21
  If all a select's inputs are boolean, we can sometimes turn the select into an AND or an OR operation:

    x ? y : 0  =>  x & y
    x ? 1 : y  =>  x | y

  I believe LLVM aggressively canonicalizes to this form. It makes sense to do here too as it is smaller (saves the constant 0 or 1). It also allows further optimizations (which is why LLVM does it), but I don't think we have those yet.
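  For inputs known to be 0 or 1, both rewrites are exact. A brute-force check over all boolean combinations (helper names are ours, mirroring the select forms above):

```cpp
#include <cstdint>

// "x ? y : 0" should equal "x & y" when x and y are both 0 or 1.
uint32_t selectAndForm(uint32_t x, uint32_t y) { return x ? y : 0; }

// "x ? 1 : y" should equal "x | y" under the same boolean precondition.
uint32_t selectOrForm(uint32_t x, uint32_t y) { return x ? 1 : y; }
```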
* Support new dylink.0 custom section format (#4141) | Sam Clegg | 2021-09-11 | 4 | -8/+90
  See also:
  spec change: https://github.com/WebAssembly/tool-conventions/pull/170
  llvm change: https://reviews.llvm.org/D109595
  wabt change: https://github.com/WebAssembly/wabt/pull/1707
  emscripten change: https://github.com/emscripten-core/emscripten/pull/15019
* Add an Intrinsics mechanism, and a call.without.effects intrinsic (#4126) | Alon Zakai | 2021-09-10 | 9 | -2/+212
  An "intrinsic" is modeled as a call to an import. We could also add new IR things for them, but that would take more work and lead to less clear errors in other tools if they try to read a binary using such a nonstandard extension.
  A first intrinsic is added here, call.without.effects. This is basically the same as call_ref except that the optimizer is free to assume the call has no side effects. Consequently, if the result is not used then the call can be optimized out (whereas an ordinary unused call must be kept around for its possible side effects). Likewise, the lack of side effects allows more reordering and other things.
  A lowering pass for intrinsics is provided. Rather than automatically lower them to normal wasm at the end of optimizations, the user must call that pass explicitly. A typical workflow might be:

    -O --intrinsic-lowering -O

  That optimizes with the intrinsic present - perhaps removing calls thanks to it - then lowers it into normal wasm - it turns into a call_ref - and then optimizes further, which can turn the call_ref into a direct call, potentially inline, etc.
* [NFC] Add a comment in MergeBlocks about subtyping (#4142) | Alon Zakai | 2021-09-10 | 1 | -2/+8
  Followup to #4137.
* [Wasm GC] ArrayInit support (#4138) | Alon Zakai | 2021-09-10 | 22 | -1/+152
  array.init is like array.new_with_rtt except that it takes as arguments the values to initialize the array with (as opposed to a size and an optional initial value).
  Spec: https://docs.google.com/document/d/1afthjsL_B9UaMqCA5ekgVmOm75BVFu6duHNsN9-gnXw/edit#
* Refactor MergeBlocks to use iteration; adds Wasm GC support (#4137) | Alon Zakai | 2021-09-09 | 1 | -30/+22
  MergeBlocks was written a very long time ago, before the iteration API, so it had a bunch of hardcoded things for specific instructions. In particular, that did not handle GC. This does a small refactoring to use iteration. The refactoring is NFC, but while doing so it adds support for new relevant instructions, including wasm GC.
* Rename isIntrinsicallyNondeterministic() to isGenerative() (#4092) | Alon Zakai | 2021-09-09 | 5 | -34/+31
* [OptimizeInstructions] propagate sign for integer multiplication (#4098) | Max Graey | 2021-09-09 | 1 | -0/+50
  ```ts
  -x * -y => (x * y)
  -x * y => -(x * y)
  x * -y => -(x * y), if x != C && y != C
  -x * C => x * -C, if C != C_pot || shrinkLevel != 0
  -x * C => -(x * C), otherwise
  ```
  We skip propagation when lhs and rhs are constants because that should be handled by constant folding. Also skip cases like `-x * 4 -> x * -4` for `shrinkLevel != 0`, as this will be further converted to `-(x << 2)`.
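  The first two rewrites hold exactly under wrapping two's-complement arithmetic, which is how wasm i32 multiplication behaves. A check modeling that with uint32_t (where overflow is well defined in C++); the neg helper is ours, standing in for wasm's `(0 - x)`:

```cpp
#include <cstdint>

// Model wasm's i32 negation: wrapping (0 - x) on a 32-bit integer.
uint32_t neg(uint32_t x) { return 0u - x; }
```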
* Make static buffers in numToString thread local (#4134) | Thomas Lively | 2021-09-09 | 1 | -4/+6
  Validation is performed on multiple threads at once, and when there are multiple validation failures, those threads can all end up in `numToString` at the same time as they construct their respective error messages. Previously the threads would race on their access to the snprintf buffers, sometimes leading to segfaults. Fix the data races by making the buffers thread local.
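  The fix pattern looks roughly like the following sketch. This is a simplified hypothetical version for illustration, not Binaryen's actual numToString: the key point is that `thread_local` gives each thread its own buffer, so concurrent callers no longer race on shared static storage.

```cpp
#include <cstdio>
#include <string>

// Each thread gets its own buffer, eliminating the data race that a
// plain "static char buf[64]" would have under concurrent callers.
std::string numToString(double d) {
  thread_local char buf[64];
  std::snprintf(buf, sizeof(buf), "%g", d);
  return buf;
}
```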
* Do not use a library for wasm-split files (#4132) | Thomas Lively | 2021-09-08 | 1 | -4/+2
* [wasm-split] Do not add exports of imported memories (#4133) | Thomas Lively | 2021-09-08 | 1 | -12/+14
  We can assume that imported memories (and the profiling data they contain) are already accessible from the module's environment, so there's no need to export them. This also avoids needing to add knowledge of "profile-memory" to Emscripten's library_dylink.js.
* Enumerate objects for wasm-split-lib (#4128) | Thomas Lively | 2021-09-07 | 1 | -1/+1
  To support CMake 3.10: `add_executable` does not support OBJECT libraries until 3.12.
* Inlining: Track names over multiple iterations, not pointers (#4127) | Alon Zakai | 2021-09-07 | 1 | -2/+6
  It can be confusing during debugging to keep a map of pointers when we might have removed some of those functions from the module meanwhile (if you iterate over it in some additional debug logging). This change has no observable effect, however, as no bug could have actually occurred in practice, given that nothing is done with the pointers in the actual code.