path: root/src
Commit message | Author | Date | Files | Lines

* Optimize ReorderGlobals ordering with a new algorithm (#6625) | Alon Zakai | 2024-05-31 | 1 | -71/+326

  The old ordering in that pass did a topological sort while sorting by uses both within topological groups and between them. That could be suboptimal in some cases, however, and on J2CL output this pass actually made the binary larger, which is how we noticed this.

  The problem is that such a topological sort keeps topological groups in place, but it can be useful to interleave them sometimes. Imagine this:

        $c - $a
       /
     $e
       \
        $d - $b

  Here $e depends on $c, etc. The optimal order may interleave the two arms here, e.g. $a, $b, $d, $c, $e. That is because the dependencies define a partial order, and so the arms here are actually independent. Sorting by topological depth first might help in some cases, but it is also not optimal in general, as we may want to mix topological depths: $a, $c, $b, $d, $e does so, and it may be the best ordering.

  This PR implements a natural greedy algorithm that picks the global with the highest use count at each step, out of the set of possible globals, which is the set of globals that have no unresolved dependencies. So we start by picking the first global with no dependencies and add it at the front; then that unlocks anything that depended on it and we pick from that set, and so forth.

  This may also not be optimal, but it is easy to make it more flexible by customizing the counts, and we consider 4 sorts here:

  * Set all counts to 0. This means we only take into account dependencies, and we break ties by the original order, so this is as close to the original order as we can be.
  * Use the actual use counts. This is the simple greedy algorithm.
  * Set the count of each global to also contain the counts of its children, so the count is the total that might be unlocked. This gives more weight to globals that can unlock more later, so it is less greedy.
  * As in the last sort, but weight children's counts lower in an exponential way, which makes sense as they may depend on other globals too.

  In practice it is simple to generate cases where 1, 2, or 3 is optimal (see new tests), but on real-world J2CL I see that 4 (with a particular exponential coefficient) is best, so the pass computes all 4 and picks the best. As a result it will never worsen the size and it has a good chance of improving. The differences between these are small, so in theory we could pick any of them, but given they are all modifications of a single algorithm it is very easy to compute them all with little code complexity.

  The benefits are rather small here, but this can save a few hundred bytes on a multi-MB Java file. This comes at a tiny compile time cost, but seems worth it for the new guarantee to never regress size.

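As an editorial illustration (not taken from the pass or its tests), the dependency diagram above could correspond to globals like the following, where the interleaved order $a, $b, $d, $c, $e is valid because each initializer reads only earlier globals. The i32.add in $e's initializer assumes the extended-const feature, and reading earlier non-imported globals assumes the relaxed constant-expression rules of the GC proposal.

    (module
      ;; $a and $b have no dependencies; $c reads $a, $d reads $b, and $e
      ;; reads both $c and $d, so the two "arms" are independent and can be
      ;; interleaved by the greedy ordering.
      (global $a i32 (i32.const 1))
      (global $b i32 (i32.const 2))
      (global $d i32 (global.get $b))
      (global $c i32 (global.get $a))
      (global $e i32 (i32.add (global.get $c) (global.get $d)))
    )
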
* LogExecution: Optionally take a module name for the logger function (#6629) | YAMAMOTO Takashi | 2024-05-31 | 1 | -12/+30

  --log-execution=NAME will use NAME as the module for the logger function import, rather than inferring it. If the name is not provided (--log-execution, as before this PR) then we will try to automatically decide which to use ("env", unless we see another module name is used, which can be the case in optimized modules).

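As an editorial illustration (not from the PR), the resulting import could look roughly like this; the module name "mylogger" is just an example value for NAME, and the field name log_execution is assumed here to be the pass's logger import:

    ;; With --log-execution=mylogger the logger is imported from "mylogger";
    ;; with plain --log-execution the module name is inferred instead,
    ;; typically "env".
    (import "mylogger" "log_execution" (func $log_execution (param i32)))
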
* Avoid duplicate type names (#6633) | Alon Zakai | 2024-05-30 | 2 | -4/+21

  If we replace a type with another, use the original name for the new type, and give the old type a unique name (for the rare cases in which it has uses).

* Fix Vacuuming of code leading up to an infinite loop (#6632) | Alon Zakai | 2024-05-29 | 1 | -4/+5

  We had that logic right in other places, but the specific part of Vacuum that looks at code that leads up to an unreachable did not check for infinite loops, so it could remove them.

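A minimal sketch of the kind of pattern involved (an illustration, not the PR's testcase): the loop never exits, so it must not be removed even though it only "leads up to" the unreachable.

    (module
      (func $f (result i32)
        ;; This loop never terminates, so execution never reaches the
        ;; unreachable below; Vacuum must keep it.
        (loop $l
          (br $l)
        )
        (unreachable)
      )
    )
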
* SignaturePruning: Properly handle public types (#6630) | Alon Zakai | 2024-05-29 | 3 | -18/+46

  The SignaturePruning pass optimizes away parameters that it proves are safe to remove. It turns out that this does not always match the definition of private types, which is more restrictive. Specifically, if say all the types are in one big rec group and one of them is used on an exported function, then all of them are considered public (as the rec group is). However, in closed world, it would be ok to leave that rec group unchanged but to create a pruned version of that type and use it, in cases where we see it is safe to remove a parameter. (See the testcase for a concrete example.) To put it another way, SignaturePruning already proves that a parameter is safe to remove in all the ways that matter.

  Before this PR, however, the testcase in this PR would error - so this PR is not an optimization but a bugfix, really - because SignaturePruning would see that a parameter is safe to remove, but then TypeUpdating would see the type is public and so it would leave it alone, leading to a broken module. This situation is in fact not that rare, and happens on real-world Java code. The reason we did not notice it before is that typically there are no remaining SignaturePruning opportunities late in the process (when other closed world optimizations have typically led to a single big rec group).

  The concrete fix here is to add additionalPrivateTypes to a few more places in TypeUpdating. We already supported that for cases where a pass knew better than the general logic what can be modified, and this adds that ability to the signature-rewriting logic there. Then SignaturePruning can send in all the types it has proven are safe to modify.

  * Also necessary here is to only add from additionalPrivateTypes if the type is not already in our list (or we'd end up with duplicates in the final rec group).
  * Also move newSignatures in SignaturePruning out of the top level, which was confusing (the pass has multiple iterations, and we want each to have a fresh instance).

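One hypothetical shape of such a case (an editorial sketch, not the PR's testcase): the rec group is public because one of its types is used by an export, yet a pruned copy of the other signature can still be created for the internal function.

    (module
      (rec
        (type $exported-sig (func (param i32)))
        (type $prunable-sig (func (param i32)))
      )
      ;; The export of $pub makes the whole rec group public...
      (func $pub (export "pub") (type $exported-sig)
      )
      ;; ...but $internal never uses its parameter, so in closed world a
      ;; pruned copy of $prunable-sig (with no parameter) could be created
      ;; and used for it.
      (func $internal (type $prunable-sig)
      )
      (func $caller (export "caller")
        (call $internal (i32.const 0))
      )
    )
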
* [NFC] Save a line in wasm-shell.cpp (#6631) | Thomas Lively | 2024-05-29 | 1 | -2/+1

* Remove obsolete parser code (#6607) | Thomas Lively | 2024-05-29 | 13 | -8992/+3

  Remove `SExpressionParser`, `SExpressionWasmBuilder`, and `cashew::Parser`. Simplify gen-s-parser.py. Remove the --new-wat-parser and --deprecated-wat-parser flags.

* Use new wast parser in wasm2js (#6606) | Thomas Lively | 2024-05-29 | 3 | -180/+145

  When generating assertions, traverse the `WASTScript` data structure rather than interleaving assertion parsing with emitting.

* Run RemoveUnneededModuleElements early (#6620) | Alon Zakai | 2024-05-29 | 1 | -0/+6

  Doing it before anything else can help a lot if there is a significant amount of dead code that can be removed, as it saves work for all the later passes. We did run this pass if GC was enabled just a few passes later down, but even so it is worthwhile to run it an additional time, and it makes sense to do even without GC (though in typical optimized LLVM outputs there will be little dead code).

  If there is no dead code then this is wasted work, but this is a fairly fast pass, and I measure no significant slowdown due to this. E.g. on the 35 MB clang.wasm (which is already optimized, so little dead code) it takes around a second, while all of -O2 takes almost two minutes, so the difference is just 1%.

  On J2CL I measure a 15% speedup in -O3 --closed-world -tnh, and also the binary is 2.5% smaller, which means there is less work for later cycles of -O3.

* OptimizeInstructions: Push StructNew down to help it fold away StructSets (#6584) | Roberto Lublinerman | 2024-05-28 | 1 | -17/+55

  Heap stores (struct.set) are optimized into the struct.new when they are adjacent in a statement list. Pushing struct.new down past irrelevant instructions increases the likelihood that it ends up adjacent to sets.

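An editorial sketch of the shape involved (hypothetical type and locals, not the PR's testcase): the struct.new can be pushed past the unrelated local.set, after which the now-adjacent struct.set can fold into (struct.new $A (i32.const 42)).

    (module
      (type $A (struct (field (mut i32))))
      (func $f
        (local $ref (ref null $A))
        (local $other i32)
        (local.set $ref (struct.new_default $A))
        ;; irrelevant to $ref, so the struct.new can move past it
        (local.set $other (i32.const 1234))
        (struct.set $A 0
          (local.get $ref)
          (i32.const 42)
        )
      )
    )
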
* Fix fuzzer generation of a DataSegment + add validation that would have caught it (#6626) | Alon Zakai | 2024-05-23 | 2 | -5/+60

  The DataSegment was manually added to .dataSegments, but we need to add it using addDataSegment so the maps are updated and getDataSegment(name) works. Also add validation that would have caught this earlier: check that each item in the item lists can be fetched by name.

* [NFC] Fix unused variable warning (#6624) | walkingeyerobot | 2024-05-22 | 1 | -2/+1

* Fix TableFill bounds checking (#6621) | Alon Zakai | 2024-05-21 | 1 | -9/+4

  The offsets are unsigned.

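For illustration (a hypothetical case, not from the PR): with unsigned offsets, a destination like 0xffffffff must be treated as 4294967295 and trap as out of bounds, rather than being interpreted as a negative index.

    (module
      (table $t 10 funcref)
      (func $f
        (table.fill $t
          (i32.const 0xffffffff)  ;; unsigned: 4294967295, out of bounds
          (ref.null func)
          (i32.const 1)
        )
      )
    )
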
* Fuzzer: Better fuzzing of globals (#6611) | Alon Zakai | 2024-05-21 | 5 | -35/+72

  With this PR we generate global.gets in globals, which we did not do before. We do that by replacing makeConst (the only thing we did before, for the contents of globals) with makeTrivial, and add code to makeTrivial to sometimes make a global.get. When no suitable global exists, makeGlobalGet will emit a constant, so there is no danger in trying.

  Also raise the number of globals a little.

  Also explicitly note the current limitation of requiring all tuple globals to contain tuple.make and nothing else, including not global.get, and avoid adding such invalid global.gets in tuple globals in the fuzzer.

* [table64] Preserve 64-bit table flag when writing binaries (#6610) | Sam Clegg | 2024-05-20 | 1 | -1/+1

* SimplifyGlobals: Do not switch a get to use a global of another type (#6605) | Alon Zakai | 2024-05-20 | 1 | -1/+8

  If we wanted to switch types in such cases we'd need to refinalize (which is likely worth doing, though other passes should refine globals anyhow).

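A hypothetical shape of such a case (an editorial sketch): $b is just a copy of $a, so a get of $b could be switched to read $a directly, but $a's type is more refined, so the get's type would change and the IR would need refinalization.

    (module
      (type $A (struct))
      (global $a (ref $A) (struct.new $A))
      (global $b anyref (global.get $a))
      (func $use (result anyref)
        (global.get $b)  ;; reading $a here would change the type to (ref $A)
      )
    )
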
* Rewrite wasm-shell to use new wast parser (#6601) | Thomas Lively | 2024-05-17 | 13 | -321/+429

  Use the new wast parser to parse a full script up front, then traverse the parsed script data structure and execute the commands. wasm-shell had previously used the new wat parser for top-level modules, but it now uses the new parser for module assertions as well. Fix various bugs this uncovered.

  After this change, wasm-shell supports all the assertions used in the upstream spec tests (although not new kinds of assertions introduced in any proposals). Uncomment various `assert_exhaustion` tests that we can now execute.

  Other kinds of assertions remain commented out in our tests: wasm-shell now supports `assert_unlinkable`, but the interpreter does not eagerly check for the existence of imports, so those tests do not pass. Tests that check for NaNs also remain commented out because they do not yet use the standard syntax that wasm-shell now supports for canonical and arithmetic NaN results, and our interpreter would not pass all of those tests even if they did use the standard syntax.

* Fix generate-dyncalls and directize passes under table64 (#6604) | Sam Clegg | 2024-05-18 | 2 | -13/+14

* Fix GlobalRefining's handling of gets in module code and add missing validation (#6603) | Alon Zakai | 2024-05-17 | 2 | -5/+8

  GlobalRefining did not traverse module code, so it did not update global.gets in other globals. Add missing validation that actually errors on that: We did not check global.get types. These could be separate PRs but it would be difficult to test them separately.

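A minimal editorial sketch of the situation (hypothetical globals): if $a's declared type is refined based on its initializer, the global.get inside $b's initializer ("module code") must be updated as well, and the validator now checks such global.get types.

    (module
      (func $f)
      (global $a funcref (ref.func $f))   ;; refinable to (ref func)
      (global $b funcref (global.get $a)) ;; this get lives in module code
    )
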
* [Memory64Lowering/Table64Lowering] Avoid dependency in visitation order. NFC (#6600) | Sam Clegg | 2024-05-16 | 2 | -26/+22

  Followup to #6599.

* [Table64Lowering] Don't assume that all segments are from 64-bit tables (#6599) | Sam Clegg | 2024-05-16 | 3 | -11/+23

  This allows modules to contain both 32-bit and 64-bit segments. In order to check the table/memory state when visiting segments, we need to ensure that memories/tables are visited only after their segments. The comments in visitTable/visitMemory already assumed this, but it wasn't true in practice.

* Fix FlatTable for table64 (#6598) | Sam Clegg | 2024-05-15 | 1 | -1/+1

* Fix binary emitting of br_if with a refined value by emitting a cast (#6510) | Alon Zakai | 2024-05-16 | 2 | -2/+172

  This makes us compliant with the wasm spec by adding a cast: we use the refined type for br_if fallthrough values, and the wasm spec uses the branch target. If the two differ, we add a cast after the br_if to make things match.

  Alternatively we could match the wasm spec's typing in our IR, but we hope the wasm spec will improve here, and so this will only be temporary in that case. Even if not, this is useful because by using the most refined type in the IR we optimize in the best way possible, and only suffer when we emit fixups in the binary, but in practice those cases are very rare: br_if is almost always dropped rather than used, in real-world code (except for fuzz cases and exploits).

  We check carefully when a br_if value is actually used (and not dropped) and its type actually differs, and it does not already have a cast. The last condition ensures that we do not keep adding casts over repeated roundtripping.

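An editorial sketch of the shape involved (hypothetical types, not the PR's testcase): in Binaryen IR the br_if value below has the refined type (ref $A), so the struct.get is valid, while the spec types it as anyref (the branch target's type), so the binary writer emits a ref.cast after the br_if.

    (module
      (type $A (struct (field i32)))
      (func $f (param $a (ref $A)) (param $cond i32) (result anyref)
        (block $l (result anyref)
          (drop
            (struct.get $A 0
              (br_if $l
                (local.get $a)     ;; (ref $A), more refined than anyref
                (local.get $cond)
              )
            )
          )
          (ref.null any)
        )
      )
    )
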
* Add table64 lowering pass (#6595) | Sam Clegg | 2024-05-15 | 7 | -65/+204

  Changes to wasm-validator.cpp here are mostly for consistency between elem and data segment validation.

* OptimizeInstructions: Add missing invalidation check in consecutive equality test (#6596) | Alon Zakai | 2024-05-15 | 1 | -0/+22

  This existed before #6495 but became noticeable there. We only looked at the fallthrough values in the later part of areConsecutiveInputsEqual, but there can be invalidation due to the non-fallthrough part:

      (i32.add
        (local.get $x)
        (block
          (local.set $x ..)
          (local.get $x)
        )
      )

  The set can cause the local.get to differ the second time. To fix this, check if the non-fallthrough part invalidates the fallthrough (but only on the right hand side).

  Fixes #6593

* [EH] Rename option/pass names for new EH (exnref) (#6592) | Heejin Ahn | 2024-05-15 | 4 | -18/+22

  We settled on the name `WASM_EXNREF` as the name of the new EH option's setting in Emscripten: https://github.com/emscripten-core/emscripten/blob/2bc5e3156f07e603bc4f3580cf84c038ea99b2df/src/settings.js#L782-L786

  "New EH" sounds vague, and I'm not sure if "experimental" is really necessary anyway, given that the potential users of this option are aware that this is a new spec that has been adopted recently. To make the option names consistent, this renames `--translate-to-eh` (the option that only runs the translator) to `--translate-to-exnref`, and `--experimental-new-eh` to `--emit-exnref` (the option that runs the translator at the end of the whole pipeline), and renames the pass and variable names in the code accordingly as well.

  In case anyone is using the old option names (and also to make the Chromium CI pass), this does not delete the old options.

* Debug location parser: accept arbitrary paths (#6594) | Jérôme Vouillon | 2024-05-15 | 1 | -15/+14

  The whole annotation was parsed as a keyword, which prevented file paths with non-ASCII characters or paths starting with `/` or `.`. Also, there was a typo: one place was comparing `fileSize` rather than `lineSize` to `contents->npos`.

* [Strings] Remove operations not included in imported strings (#6589) | Thomas Lively | 2024-05-15 | 26 | -1002/+215

  The stringref proposal has been superseded by the imported JS strings proposal, but the former has many more operations than the latter. To reduce complexity, remove all operations that are part of stringref but not part of imported strings.

* [Strings] Remove stringview types and instructions (#6579) | Thomas Lively | 2024-05-15 | 40 | -1619/+255

  The stringview types from the stringref proposal have three irregularities that break common invariants and require pervasive special casing to handle properly: they are supertypes of `none` but not subtypes of `any`, they cannot be the targets of casts, and they cannot be used to construct nullable references. At the same time, the stringref proposal has been superseded by the imported strings proposal, which does not have these irregularities. The cost of maintaining and improving our support for stringview types is no longer worth the benefit of supporting them.

  Simplify the code base by entirely removing the stringview types and related instructions that do not have analogues in the imported strings proposal and do not make sense in the absence of stringviews. Three remaining instructions, `stringview_wtf16.get_codeunit`, `stringview_wtf16.slice`, and `stringview_wtf16.length`, take stringview operands in the stringref proposal but cannot be removed because they lower to operations from the imported strings proposal. These instructions are changed to take stringref operands in Binaryen IR, and to allow a graceful upgrade path for users of these instructions, the text and binary parsers still accept but ignore `string.as_wtf16`, which is the instruction used to convert stringrefs to stringviews. The binary writer emits code sequences that use scratch locals and `string.as_wtf16` to keep the output valid.

  Future PRs will further align Binaryen with the imported strings proposal instead of the stringref proposal, for example by making `string` a subtype of `extern` instead of a subtype of `any` and by removing additional instructions that do not have analogues in the imported strings proposal.

* LocalCSE: Fix regression from #6587 by accumulating generativity (#6591) | Alon Zakai | 2024-05-15 | 1 | -28/+44

  #6587 was incorrect: it checked generativity early in an incremental manner, but it did not accumulate that information as we do with hashes. As a result we could end up optimizing something with a generative child, and sadly we lacked testing for that case. This adds incremental generativity computation alongside hashes. It also splits out this check from isRelevant.

  Also add a test for nested effects (as opposed to generativity), but that already worked before this PR (as we compute effects and invalidation as we go, already).

* Remove redundant ptrType from MemorySize/Grow instructions. NFC (#6590) | Sam Clegg | 2024-05-15 | 7 | -19/+9

  I recently added TableSize/Grow and noticed I didn't need these. It seems they are superfluous.

* Source maps: Allow specifying that an expression has no debug info in text (#6520) | Jérôme Vouillon | 2024-05-14 | 8 | -36/+95

  `;;@` with nothing else (no source:line) can be used to specify that the following expression does not have any debug info associated to it. This can be used to stop the automatic propagation of debug info in the text parsers. The text printer has also been updated to output this comment when needed.

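An editorial sketch of the annotation in use (the file and line values are made up; the non-empty form follows the `source:line` syntax described above):

    (func $f
      ;;@ foo.c:10
      (nop)   ;; gets the debug location foo.c:10
      (nop)   ;; would normally inherit that location by propagation
      ;;@
      (nop)   ;; explicitly has no debug info
    )
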
* [NFC] Add printing for EffectAnalyzer (#6586) | Alon Zakai | 2024-05-14 | 3 | -0/+147

  With this you can do `std::cout << effects` and get something like

      EffectAnalyzer { writesMemory hasSideEffects }

* LocalCSE: Ignore traps of code in between (#6588) | Alon Zakai | 2024-05-14 | 1 | -1/+11

  Given:

      (ORIGINAL)
      (in between)
      (COPY)

  we want to change that to

      (local.tee $temp (ORIGINAL))
      (in between)
      (local.get $temp)

  It is fine if "in between" traps: then we never reach the new local.get. This is a safer situation than most optimizations because we are not reordering anything, only replacing known-equivalent code.

* LocalCSE: Check effects/generativity early (#6587) | Alon Zakai | 2024-05-14 | 4 | -28/+74

  Previously we checked late, and as a result might end up failing to optimize when a sub-pattern could have worked. E.g.

      (call
        (A)
      )
      (call
        (A)
      )

  The call cannot be optimized, but the A pattern repeats. Before this PR we'd greedily focus on the entire call and then fail. After this PR we skip the call before we commit to which patterns to try to optimize, so we succeed.

  Add an isShallowlyGenerative helper here as we compute this step by step as we go. Also remove a parameter to the generativity code (it did not use the features it was passed).

* Simplify scratch local calculation (#6583) | Thomas Lively | 2024-05-13 | 2 | -56/+71

  Change `countScratchLocals` to return the count and type of necessary scratch locals. It used to encode them as keys in the global map from scratch local types to local indices, which could not handle having more than one scratch local of a given type and was generally harder to reason about due to its use of global state. Take the opportunity to avoid emitting unnecessary scratch locals for `TupleExtract` expressions that will be optimized to not use them. Also simplify and better document the calculation of the mapping from IR indices to binary indices for all locals, scratch and non-scratch.

* [Parser] Parse wast scripts (#6581) | Thomas Lively | 2024-05-13 | 6 | -3/+527

  The spec tests use an extension of the standard text format that includes various commands and assertions used to test WebAssembly implementations. Add a utility to parse this extended WebAssembly script format and use it in wasm-shell to check that it parses our spec tests without error. Fix a few errors the new parser found in our spec tests.

  A future PR will rewrite wasm-shell to interpret the results of the new parser, but for now to keep the diff smaller, do not do anything with the new parser except check for errors.

* [interp] Make memory and table handling more consistent. NFC (#6582) | Sam Clegg | 2024-05-13 | 1 | -83/+73

* [memory64] Add table64 to existing memory64 support (#6577) | Sam Clegg | 2024-05-10 | 11 | -79/+172

  Testing is still very limited. Hopefully we can use the upstream spec tests soon and avoid having to write our own tests for `.set/.set/.fill/etc`. See https://github.com/WebAssembly/memory64/issues/51

* [StackIR] Run StackIR during binary writing and not as a pass (#6568) | Alon Zakai | 2024-05-09 | 32 | -780/+703

  Previously we had passes --generate-stack-ir, --optimize-stack-ir, --print-stack-ir that could be run like any other passes. After generating StackIR it was stashed on the function and invalidated if we modified BinaryenIR. If it wasn't invalidated then it was used during binary writing. This PR switches things so that we optionally generate, optimize, and print StackIR only during binary writing. It also removes all traces of StackIR from wasm.h - after this, StackIR is a feature of binary writing (and printing) logic only.

  This is almost NFC, but there are some minor noticeable differences:

  1. We no longer print "has StackIR" in the text format when we see it is there. It will not be there during normal printing, as it is only present during binary writing. (But --print-stack-ir still works as before; as mentioned above it runs during writing.)
  2. --generate/optimize/print-stack-ir change from being passes to being flags that control that behavior instead. As passes, their order on the commandline mattered, while now it does not, and they only "globally" affect things during writing.
  3. The C API changes slightly, as there is no need to pass it an option "optimize" to the StackIR APIs. Whether we optimize is handled by --optimize-stack-ir, which is set like other optimization flags on the PassOptions object, so we don't need the old option to those C APIs.

  The main benefit here is simplifying the code, so we don't need to think about StackIR in more places than just binary writing. That may also allow future improvements to our usage of StackIR.

* [validator] Remove indexType helper function (#6576) | Sam Clegg | 2024-05-09 | 1 | -23/+18

  It seems that each of the callsites has already looked up the `Memory` object, so this helper is not doing anything useful.

* [J2Cl] Make J2clOpts more effective with transitive deps in constant initialization (#6571) | Roberto Lublinerman | 2024-05-09 | 1 | -43/+129

  Constants that need to be hoisted sometimes are initialized by calling getters of other constants that need to be hoisted. These getters are non-trivial, e.g.

      (func $getConst1_<once>_@X (result (ref null $A))
        (block (result (ref null $A))
          (if (i32.eqz (ref.is_null (global.get $$const1@X)))
            (then
              (return (global.get $$const1@X))
            )
          )
          (global.set $$const1@X (struct.new $A (i32.const 2)))
          (global.get $$const1@X)
        )
      )

      (func $getConst2_<once>_@X (result (ref null $A))
        (block (result (ref null $A))
          (if (i32.eqz (ref.is_null (global.get $$const2@X)))
            (then
              (return (global.get $$const2@X))
            )
          )
          (global.set $$const2@X .... expression with (call $getConst1_<once>_@X) ....)
          (global.get $$const2@X)
        )
      )

  and can only be simplified after the constants they initialize are hoisted. After the constant is hoisted, the getter can be inlined and constants that depend on it for their initialization can now be hoisted.

  Before this pass, inlining would happen after the pass was run by a subsequent run of the inliner (likely as part of -O3), requiring as many runs of this pass, interleaved with the inliner, as the depth in the call sequence. By having a simpler inliner run as part of the loop in this pass, the pass becomes more effective and more independent of the call depths.

* Fix CFGWalker issue in single-threaded mode (#6573) | 许鑫权 | 2024-05-09 | 1 | -0/+2

  In that mode a walk on an entire module will reuse the same CFGWalker instance, so we must manually clear some fields, and we forgot some before.

* Fuzzer: Stop emitting nullable stringviews (#6574) | Alon Zakai | 2024-05-08 | 2 | -5/+26

  As of https://chromium-review.googlesource.com/c/v8/v8/+/5471674 V8 requires stringviews to be non-nullable. It might be possible to make that change in our IR, or to remove views entirely, but for now this PR makes the fuzzer stop emitting nullable stringviews as a workaround to allow us to fuzz current V8.

  There are still rare corner cases where this pattern is emitted, that we have not tracked down, and so this also makes the fuzzer ignore the error for now.

* wasm-split: Handle RefFuncs (#6513) | Alon Zakai | 2024-05-08 | 1 | -8/+85

  When we have a ref.func that refers to the secondary module, then make a trampoline that calls it directly. The trampoline's call is then fixed up like all direct calls to the secondary module.

* Allow DWARF and multivalue together (#6570) | Heejin Ahn | 2024-05-06 | 1 | -9/+29

  This allows writing of binaries with DWARF info when multivalue is enabled. Currently we just crash when both are enabled together. This just assumes that, unless we have run DWARF-invalidating passes, all locals added for tuples or scratch locals have been added at the end of the local list, so printing all locals in order preserves the DWARF info. Tuple locals are expanded in place and scratch locals are added at the end.

* Source map fixes (#6550) | Jérôme Vouillon | 2024-05-02 | 6 | -25/+44

  * Keep debug locations at function start. The `fn_prolog_epilog.debugInfo` test is failing otherwise, since there was debug information associated to the nop instruction at the beginning of the function.
  * Do not clear the debug information when reaching the end of the source map. The last segment should extend to the end of the function.
  * Propagate debug location from the function prolog to its first instruction.
  * Fix printing of epilogue location. The text parser no longer propagates locations to the epilogue, so we should always print the location if there is one.
  * Fix debug location smearing. The debug location of the last instruction should not smear into the function epilogue, and a debug location from a previous function should not smear into the prologue of the current function.

* Respect the Web limitation on Table size (#6567) | Alon Zakai | 2024-05-01 | 6 | -9/+43

  Without this the fuzzer can error on differences in behavior between V8 and us. Also move the limit constants to their own header.

* [StackIR] Support source maps and DWARF with StackIR (#6564) | Alon Zakai | 2024-05-01 | 3 | -5/+21

  Helping #6509, this fixes debugging support for StackIR, which makes it more possible to use StackIR in more places. The fix is basically just to pass around some more state, and then to call the parent with "please write debug info" at the correct times, mirroring the similar calls in BinaryenIRWriter.

  The relevant Emscripten tests pass, and the source map test modified here produces identical output in StackIR and non-StackIR modes (the test is also simplified to remove --new-wat-parser, which is no longer needed, after which the test can clearly show that StackIR has the same output as BinaryenIR).

* J2CLOpts: Add "precompute" and "remove-unused-brs" as additional cleanup | Roberto Lublinerman | 2024-04-30 | 1 | -0/+2

  This makes the cleanup of bodies of functions that have had constants hoisted from them more effective.
