path: root/test/lit/passes/precompute-gc.wast
Commit message | Author | Age | Files | Lines
* Remove extra space printed in empty structs (#6750) | Thomas Lively | 2024-07-16 | 1 | -1/+1
When we switched to the new type printing machinery, we inserted this extra space to minimize the diff in the test output compared with the previous type printer. Improve the quality of the printed output by removing it.
* Revert "Strings: Disable precomputing for now (#6412)" (#6413)Alon Zakai2024-03-201-2/+2
| | | | | | | | This reverts commit 70ac213fce134840609190a5d3a18118a089ba8a. Reverts #6412 On second thought we found a way to make fixing this less urgent, and the code size downsides of this are worrying, so let's revert it.
* Strings: Disable precomputing for now (#6412) | Alon Zakai | 2024-03-20 | 1 | -2/+2
Our UTF implementation still does not seem fully stable, as we have reports of issues. Disable it for now.
* [NFC] Add the type of the Expression when eliding it (#6362) | Alon Zakai | 2024-02-28 | 1 | -3/+3
In some cases we don't print an Expression in full if it is unreachable, so we print something instead as a placeholder. This happens in unreachable code when the children don't provide enough info to print the parent (e.g. a StructGet with an unreachable reference doesn't know what struct type to use). This PR prints out the name of the Expression type of such things, which can help debugging sometimes.
* Precompute: Optimize array.len (#6299) | Alon Zakai | 2024-02-12 | 1 | -2/+0
Arrays have immutable length, so we can optimize them like immutable fields.
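A minimal sketch of the kind of pattern this enables (module and names are hypothetical): since array length is fixed at creation, the `array.len` of a freshly created array of known size can be precomputed to a constant.

```
(module
 (type $arr (array i32))
 (func $len (result i32)
  ;; The length of a just-created 3-element array is known, so Precompute
  ;; can turn this whole expression into (i32.const 3).
  (array.len
   (array.new_default $arr (i32.const 3)))
 )
)
```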
* Revert "Stop propagating/inlining string constants (#6234)" (#6258)Alon Zakai2024-01-311-5/+3
| | | | | | | | | | This reverts commit 9090ce56fcc67e15005aeedc59c6bc6773220f11. This has the effect of once more propagating string constants from globals to other places (and from non-globals too), which is useful for various optimizations even if it isn't useful in the final output. To fix the final output problem, #6257 added a pass that is run at the end to collect string.const to globals, which allows us to once more propagate strings in the optimizer, now without a downside.
* Update the text syntax for tuple types (#6246) | Thomas Lively | 2024-01-26 | 1 | -5/+5
Instead of e.g. `(i32 i32)`, use `(tuple i32 i32)`. Having a keyword to introduce the s-expression is more consistent with the rest of the language.
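A hypothetical local using the updated syntax:

```
;; Previously written as (local $t (i32 i64)).
(local $t (tuple i32 i64))
```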
* Stop propagating/inlining string constants (#6234) | Alon Zakai | 2024-01-23 | 1 | -3/+5
This causes overhead at the moment, since executing a string.const in VMs will actually allocate a string, and more copies mean more allocations. For now, just do not add more. This required changes to two passes: SimplifyGlobals and Precompute.
* Require `then` and `else` with `if` (#6201) | Thomas Lively | 2024-01-04 | 1 | -30/+62
We previously supported (and primarily used) a non-standard text format for conditionals in which the condition, if-true expression, and if-false expression were all simply s-expression children of the `if` expression. The standard text format, however, requires the use of `then` and `else` forms to introduce the if-true and if-false arms of the conditional.

Update the legacy text parser to require the standard format and update all tests to match. Update the printer to print the standard format as well. The .wast and .wat test inputs were mechanically updated with this script: https://gist.github.com/tlively/85ae7f01f92f772241ec994c840ccbb1
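A sketch of the two forms (the condition and arms are placeholders):

```
;; Legacy folded form, no longer accepted:
;; (if (i32.const 1) (nop) (nop))
;; Standard form, now required:
(if (i32.const 1)
 (then (nop))
 (else (nop))
)
```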
* Add an arity immediate to tuple.extract (#6172) | Thomas Lively | 2023-12-12 | 1 | -1/+1
Once support for tuple.extract lands in the new WAT parser, this arity immediate will let the parser determine how many values it should pop off the stack to serve as the tuple operand to `tuple.extract`. This will usually coincide with the arity of a tuple-producing instruction on top of the stack, but in the spirit of treating the input as a proper stack machine, it will not have to and the parser will still work correctly.
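For illustration, assuming a two-element tuple in a hypothetical local `$t`, the arity (2) is written before the index (1):

```
(tuple.extract 2 1
 (local.get $t))
```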
* Update `tuple.make` text format to include arity (#6169) | Thomas Lively | 2023-12-12 | 1 | -2/+2
Previously, the number of tuple elements was inferred from the number of s-expression children of the `tuple.make` expression, but that scheme would not work in the new wat parser, where s-expressions are optional and cannot be semantically meaningful. Update the text format to take the number of tuple elements (i.e. the tuple arity) as an immediate. This new format will be able to be implemented in the new parser as follow-on work.
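A sketch of the updated form, with the element count as the leading immediate:

```
;; Previously: (tuple.make (i32.const 1) (i64.const 2))
(tuple.make 2
 (i32.const 1)
 (i64.const 2))
```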
* Precompute: Check defaultability, not nullability (#6052) | Alon Zakai | 2023-10-25 | 1 | -8/+33
Follow-up to #6048: we did not handle nondefaultable tuples because of this.
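As a hypothetical illustration of the distinction: a tuple is defaultable only if every element is, so checking nullability of reference elements alone is not enough.

```
(local $ok (tuple i32 (ref null func)))  ;; defaultable: every element has a default value
(local $bad (tuple i32 (ref func)))      ;; nondefaultable: (ref func) has no default value
```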
* Fix unreachable code in LocalGraph by making it imprecise there (#6048) | Alon Zakai | 2023-10-24 | 1 | -8/+83
Followup to #6046 - the fuzzer found we missed handling the case of the entry itself being unreachable, or of an unreachable loop later. Properly identifying unreachable code requires a flow analysis, unfortunately, so this PR gives up on that and instead allows LocalGraph to be imprecise in unreachable code. That avoids adding any overhead, but does mean the IR may be slightly confusing when debugging. It does not have any optimization downsides, however, as it only affects unreachable code.

Also add a dump() impl in that file which helps debugging.
* [NFC] LocalGraph: Optimize params with no sets (#6046) | Alon Zakai | 2023-10-24 | 1 | -6/+39
If a local index has no sets, then all gets of that index read from the entry block (a param, or a zero for a local). This is actually a common case, where a param has no other set, and so it is worth optimizing, which this PR does by avoiding any flowing operation at all for that index: we just skip and write the entry block as the source of information for such gets.

#6042 on precompute-propagate goes from 3 minutes to 2 seconds with this (!). But that testcase is rather special in that it is a huge function with many, many gets in it, so the overhead we remove is very noticeable there.
* Simplify and consolidate type printing (#5816) | Thomas Lively | 2023-08-24 | 1 | -24/+24
When printing Binaryen IR, we previously generated names for unnamed heap types based on their structure. This was useful for seeing the structure of simple types at a glance without having to separately go look up their definitions, but it also had two problems:

1. The same name could be generated for multiple types. The generated names did not take into account rec group structure or finality, so types that differed only in these properties would have the same name. Also, generated type names were limited in length, so very large types that shared only some structure could also end up with the same names. Using the same name for multiple types produces incorrect and unparsable output.

2. The generated names were not useful beyond the most trivial examples. Even with length limits, names for nontrivial types were extremely long and visually noisy, which made reading disassembled real-world code more challenging.

Fix these problems by emitting simple indexed names for unnamed heap types instead. This regresses readability for very simple examples, but the trade-off is worth it.

This change also reduces the number of type printing systems we have by one. Previously we had the system in Print.cpp, but we had another, more general and extensible system in wasm-type-printing.h and wasm-type.cpp as well. Remove the old type printing system from Print.cpp and replace it with a much smaller use of the new system. This requires significant refactoring of Print.cpp so that the PrintExpressionContents object now holds a reference to a parent PrintSExpression object that holds the type name state.

This diff is very large because almost every test output changed slightly. To minimize the diff and ease review, change the type printer in wasm-type.cpp to behave the same as the old type printer in Print.cpp except for the differences in name generation. These changes will be reverted in much smaller PRs in the future to generally improve how types are printed.
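A rough illustration of the new output (the definitions are made up): unnamed heap types now print under short indexed names rather than long structural ones.

```
(type $0 (struct (field i32)))
(type $1 (array (mut i8)))
```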
* Use the standard syntax for ref.cast, ref.test and array.new_fixed (#5894) | Jérôme Vouillon | 2023-08-23 | 1 | -4/+4
  * Update text output for `ref.cast` and `ref.test`
  * Update text output for `array.new_fixed`
  * Update tests with new syntax for `ref.cast` and `ref.test`
  * Update tests with new `array.new_fixed` syntax
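A sketch of the standard spellings these tests moved to (type names and operands are hypothetical):

```
;; ref.test and ref.cast now take a full reference type:
(drop (ref.test (ref $struct) (local.get $x)))
(drop (ref.cast (ref null $struct) (local.get $x)))
;; array.new_fixed, as in the standard, takes the element count as an immediate:
(drop (array.new_fixed $arr 2 (i32.const 1) (i32.const 2)))
```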
* Further improve ref.cast during finalization (#5882) | Thomas Lively | 2023-08-17 | 1 | -1/+1
We previously improved the nullability and heap type of the ref.cast target type in RefCast::finalize() based on what we knew about its input type. Simplify the code and make this improvement more powerful by using the greatest lower bound of the original cast target and input type.
* Update br_on_cast binary and text format (#5762) | Thomas Lively | 2023-06-12 | 1 | -2/+2
The final versions of the br_on_cast and br_on_cast_fail instructions have two reference type annotations: one for the input type and one for the cast target type. In the binary format, this is represented as a flags byte followed by two encoded heap types. Upgrade all of the tests at once to use the new versions of the instructions and drop support for the old instructions from the text parser. Keep support in the binary parser to avoid breaking users, though. Drop some binary tests of deprecated instruction encodings that would be more effort to update than they're worth.

Re-land with fixes of #5734
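A sketch of the final text form, with the input type followed by the cast target type (label and types are placeholders):

```
(br_on_cast $label anyref (ref $struct)
 (local.get $x))
```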
* Revert "Update br_on_cast binary and text format (#5734)" (#5740)Alon Zakai2023-05-231-2/+2
| | | | | | | This reverts commit b7b1d0df29df14634d2c680d1d2c351b624b4fbb. See comment at the end of #5734: It turns out that dropping the old opcodes causes problems for current users, so let's revert this for now, and later we can figure out how best to do the update.
* Update br_on_cast binary and text format (#5734) | Thomas Lively | 2023-05-19 | 1 | -2/+2
The final versions of the br_on_cast and br_on_cast_fail instructions have two reference type annotations: one for the input type and one for the cast target type. In the binary format, this is represented as a flags byte followed by two encoded heap types. Since these instructions have been in flux for a while, do not attempt to maintain backward compatibility with older versions of the instructions. Instead, upgrade all of the tests at once to use the new versions of the instructions. Drop some binary tests of deprecated instruction encodings that would be more effort to update than they're worth.
* Print function types on function imports in the text format (#5727) | Alon Zakai | 2023-05-17 | 1 | -1/+1
The function type should be printed there just like for non-imported functions.
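For illustration (names are hypothetical), an imported function now prints with its type, just as a defined function does:

```
(import "env" "log" (func $log (type $sig) (param i32)))
```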
* Remove --nominal from more tests (#5664) | Thomas Lively | 2023-04-13 | 1 | -456/+2
These tests were easy to remove --nominal from because they already worked with the standard type system as well.
* [Wasm GC] Properly handle packed field truncation in StructNew (#5570) | Alon Zakai | 2023-03-13 | 1 | -0/+18
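A minimal hypothetical example of the behavior this covers: values stored into a packed field keep only their low bits, so precomputing a get of an i8 field must truncate.

```
(module
 (type $packed (struct (field i8)))
 (func $get (result i32)
  ;; 0x1234 is truncated to its low 8 bits by struct.new, so precomputing
  ;; this must yield 0x34, not 0x1234.
  (struct.get_u $packed 0
   (struct.new $packed (i32.const 0x1234)))
 )
)
```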
* [Strings] Initial string execution support (#5491) | Alon Zakai | 2023-02-15 | 1 | -0/+40
  * Store string data as GC data. Inefficient (one Const per char), but ok for now.
  * Implement string.new_wtf16 and string.const, enough for basic testing.
  * Create strings in makeConstantExpression, which enables ctor-eval support.
  * Print strings in fuzz-exec, which makes testing easier.
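A tiny sketch (assuming the strings feature is enabled; the function is hypothetical):

```
(func $hello (result stringref)
 ;; string.const is now executable in the interpreter, which lets fuzz-exec
 ;; and ctor-eval handle it.
 (string.const "hello")
)
```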
* Use non-nullable ref.cast for non-nullable input (#5335) | Thomas Lively | 2022-12-09 | 1 | -3/+3
We switched from emitting the legacy `ref.cast_static` instruction to emitting `ref.cast null` in #5331, but that wasn't quite correct. The legacy instruction had polymorphic typing so that its output type was nullable if and only if its input type was nullable. In contrast, `ref.cast null` always has a nullable output type. Fix our output by instead emitting non-nullable `ref.cast` if the output should be non-nullable. Parse `ref.cast` in binary and text forms as well. Since the IR can only represent the legacy polymorphic semantics, disallow unsupported casts from nullable to non-nullable references or vice versa for now.
* Add standard versions of WasmGC casts (#5331) | Thomas Lively | 2022-12-07 | 1 | -9/+9
We previously supported only the non-standard cast instructions introduced when we were experimenting with nominal types. Parse the names and opcodes of their standard counterparts and switch to emitting the standard names and opcodes. Port all of the tests to use the standard instructions, but add additional tests showing that the non-standard versions are still parsed correctly.
* Change the default type system to isorecursive (#5239) | Thomas Lively | 2022-11-23 | 1 | -31/+31
This makes Binaryen's default type system match the WasmGC spec. Update the way type definitions without supertypes are printed to reduce the output diff for MVP tests that do not involve WasmGC. Also port some type-builder.cpp tests from test/example to test/gtest since they needed to be rewritten to work with isorecursive types anyway. A follow-on PR will remove equirecursive types completely.
* Revert "Revert "Make `call_ref` type annotations mandatory (#5246)" (#5265)" ↵Thomas Lively2022-11-161-1/+1
| | | | | (#5266) This reverts commit 570007dbecf86db5ddba8d303896d841fc2b2d27.
* Revert "Make `call_ref` type annotations mandatory (#5246)" (#5265)Thomas Lively2022-11-161-1/+1
| | | | | This reverts commit b2054b72b7daa89b7ad161c0693befad06a20c90. It looks like the necessary V8 change has not rolled out everywhere yet.
* Make `call_ref` type annotations mandatory (#5246) | Thomas Lively | 2022-11-15 | 1 | -1/+1
They were optional for a while to allow users to gracefully transition to using them, but now make them mandatory to match the upstream WasmGC spec.
* Implement bottom heap types (#5115) | Thomas Lively | 2022-10-07 | 1 | -22/+36
These types, `none`, `nofunc`, and `noextern`, are uninhabited, so references to them can only possibly be null. To simplify the IR and increase type precision, introduce new invariants that all `ref.null` instructions must be typed with one of these new bottom types and that `Literals` have a bottom type iff they represent null values.

These new invariants require several additional changes. First, it is now possible that the `ref` or `target` child of a `StructGet`, `StructSet`, `ArrayGet`, `ArraySet`, or `CallRef` instruction has a bottom reference type, so it is not possible to determine what heap type annotation to emit in the binary or text formats. (The bottom types are not valid type annotations since they do not have indices in the type section.) To fix that problem, update the printer and binary emitter to emit unreachables instead of the instruction with undetermined type annotation. This is a valid transformation because the only possible value that could flow into those instructions in that case is null, and all of those instructions trap on nulls.

That fix uncovered a latent bug in the binary parser in which new unreachables within unreachable code were handled incorrectly. This bug was not previously found by the fuzzer because we generally stop emitting code once we encounter an instruction with type `unreachable`. Now, however, it is possible to emit an `unreachable` for instructions that do not have type `unreachable` (but are known to trap at runtime), so we will continue emitting code. See the new test/lit/parse-double-unreachable.wast for details.

Update other miscellaneous code that creates `RefNull` expressions and null `Literals` to maintain the new invariants as well.
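For illustration, the three bottom heap types, which are now the only valid types for `ref.null`:

```
(drop (ref.null none))     ;; bottom of the internal (any) hierarchy
(drop (ref.null nofunc))   ;; bottom of the function hierarchy
(drop (ref.null noextern)) ;; bottom of the external hierarchy
```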
* Simplify and fix heap type counting (#5110) | Thomas Lively | 2022-10-04 | 1 | -2/+4
Annotations on array.get and array.set were not being counted and the code could generally be simplified since `count` already ignores types that don't need to be counted.
* Emit call_ref with a type annotation (#5079) | Thomas Lively | 2022-09-23 | 1 | -2/+2
Emit call_ref instructions with type annotations and a temporary opcode. Also implement support for parsing optional type annotations on call_ref in the text and binary formats. This is part of a multi-part graceful update to switch Binaryen and all of its users over to using the type-annotated version of call_ref without there being any breakage.
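A sketch of the annotated form (the signature type and operands are hypothetical):

```
(type $sig (func (param i32) (result i32)))
;; ...
(call_ref $sig
 (i32.const 1)
 (local.get $f))
```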
* [Wasm GC] Support non-nullable locals in the "1a" form (#4959) | Alon Zakai | 2022-08-31 | 1 | -0/+58
An overview of this is in the README in the diff here (conveniently, it is near the top of the diff). Basically, we fix up nn locals after each pass, by default. This keeps things easy to reason about - what validates is what is valid wasm - but there are some minor nuances as mentioned there, in particular, we ignore nameless blocks (which are commonly added by various passes; ignoring them means we can keep more locals non-nullable).

The key addition here is LocalStructuralDominance which checks which local indexes have the "structural dominance" property of 1a, that is, that each get has a set in its block or an outer block that precedes it. I optimized that function quite a lot to reduce the overhead of running that logic after each pass. The overhead is something like 2% on J2Wasm and 0% on Dart (0%, because in this mode we shrink code size, so there is less work actually, and it balances out).

Since we run fixups after each pass, this PR removes logic to manually call the fixup code from various places we used to call it (like eh-utils and various passes). Various passes are now marked as requiresNonNullableLocalFixups => false. That lets us skip running the fixups after them, which we normally do automatically. This helps avoid overhead. Most passes still need the fixups, though - any pass that adds a local, or a named block, or moves code around, likely does.

This removes a hack in SimplifyLocals that is no longer needed. Before we worked to avoid moving a set into a try, as it might not validate. Now, we just do it and let fixups happen automatically if they need to: in the common code they probably don't, so the extra complexity seems not worth it.

Also removes a hack from StackIR. That hack tried to avoid roundtrip adding a nondefaultable local. But we have the logic to fix that up now, and opts will likely keep it non-nullable as well.

Various tests end up updated here because now a local can be non-nullable - previous fixups are no longer needed.

Note that this doesn't remove the gc-nn-locals feature. That has been useful for testing, and may still be useful in the future - it basically just allows nn locals in all positions (that can't read the null default value at the entry). We can consider removing it separately.

Fixes #4824
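A hypothetical example of the "1a" structural dominance property being checked:

```
(func $example (param $p (ref any))
 (local $x (ref any))
 (local.set $x (local.get $p))
 ;; Valid: the set above precedes this get in the same block.
 (drop (local.get $x))
 (block
  ;; Also valid: the set is in an enclosing block.
  (drop (local.get $x))
 )
)
```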
* Remove RTTs (#4848) | Thomas Lively | 2022-08-05 | 1 | -233/+56
RTTs were removed from the GC spec, and if they are added back in the future, they will be heap types rather than value types as in our implementation. Updating our implementation to have RTTs be heap types would have been more work than deleting them for questionable benefit, since we don't know how long it will be before they are specced again.
* Print heap types in text format in nominal mode (#4316) | Alon Zakai | 2021-11-08 | 1 | -29/+29
Without this, roundtripping may not work in nominal mode, as we might not assign the expected heap types in the right places. Specifically, when the signature matches but the nominal types are distinct, we need to keep them that way (and the sugar in the text format parsing will merge them).
* Return the correct flow when an RTT is breaking (#4310) | Thomas Lively | 2021-11-05 | 1 | -0/+45
Fixes #4308.
* Fix printing of some unreachable GC instructions (#4281) | Alon Zakai | 2021-10-27 | 1 | -8/+16
We have separate logic for printing their headers and bodies, and they were not in sync. Specifically, we would not emit drops in the body of a block, which is not valid, and would fail roundtripping on text.
* Switch from "extends" to M4 nominal syntax (#4248)Thomas Lively2021-10-141-2/+455
| | | | | | | | Switch from "extends" to M4 nominal syntax Change all test inputs from using the old (extends $super) syntax to using the new *_subtype syntax for their inputs and also update the printer to emit the new syntax. Add a new test explicitly testing the old notation to make sure it keeps working until we remove support for it.
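A sketch of the new notation (types hypothetical); the supertype is named last in the `*_subtype` form that replaces the `(extends $super)` clause:

```
(type $A (struct (field i32)))
(type $B (struct_subtype (field i32) (field i64) $A))
```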
* Precompute: Track reference identity (#4243) | Alon Zakai | 2021-10-14 | 1 | -22/+518
Precompute will run the interpreter on struct.new etc. repeatedly, as it keeps doing so while it propagates constant values around (if one of the operands to the struct.new becomes constant, that could have a noticeable effect). But creating new GC data means we lose track of their identity, and so ref.eq would not work, and we disabled basically all struct operations. This implements identity tracking so we can start to optimize there, which is a step towards using it for immutable field propagation.

To track identity, always store the data representing each struct.new in the source using the same GCData structure. That keeps identity consistent no matter how many times we execute.
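A rough illustration (the module is hypothetical) of why identity matters to this pass:

```
(module
 (type $t (struct (field i32)))
 (func $same (result i32)
  (local $x (ref null $t))
  (local.set $x (struct.new $t (i32.const 1)))
  ;; Because each struct.new is represented by a single shared GCData,
  ;; repeated evaluation keeps identity, and this self-comparison can be
  ;; precomputed to 1.
  (ref.eq
   (local.get $x)
   (local.get $x))
 )
)
```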
* [Wasm GC] Implement static (rtt-free) StructNew, ArrayNew, ArrayInit (#4172) | Alon Zakai | 2021-09-23 | 1 | -0/+20
See #4149. This modifies the test added in #4163, which used static casts on dynamically-created structs and arrays. That was technically not valid (as we won't want users to "mix" the two forms). This makes that test 100% static, which both fixes the test and gives test coverage to the new instructions added here.
* Test GC lit tests with --nominal as well (#4043) | Thomas Lively | 2021-08-02 | 1 | -0/+2
Add a new run line to every lit test containing a struct type to run the test again with nominal typing. In cases where tests do not use any struct subtyping, this does not change the test output. In cases where struct subtyping is used, a new check prefix is introduced to capture the difference that `(extends ...)` clauses are emitted in nominal mode but not in equirecursive mode. There are no other test differences. Some tests are cleaned up along the way. Notably, O_all-features{,-ignore-implicit-traps}.wast is consolidated to a single file.
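The run lines look roughly like this (pass names, flags, and check prefixes are illustrative):

```
;; RUN: wasm-opt %s --precompute-propagate -all -S -o - | filecheck %s
;; RUN: wasm-opt %s --precompute-propagate -all --nominal -S -o - | filecheck %s --check-prefix NOMNL
```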
* Add option to add checks for all items (#3961) | Thomas Lively | 2021-07-02 | 1 | -0/+2
Add an --all-items flag to update_lit_checks.py to emit checks for all module items, not just those that match items in the input. Update two tests to use generated input with the new flag. Also, to improve readability, insert an empty line between consecutive checks for different items.
* Generate FileCheck checks for all module items (#3957) | Thomas Lively | 2021-06-28 | 1 | -0/+5
Instead of only generating checks for functions, generate checks for all named top-level module items, such as types, tags, tables, and memories. Because module items can be in different orders in the input and the output but FileCheck checks must follow the order of the output, we need to be slightly clever about when we emit the checks. Consider these types in the input file:

```
(type $A (...))
(type $B (...))
```

If their order is reversed in the output file, then the checks for $B need to be emitted before the checks for $A, so the resulting module will look like this:

```
;; CHECK: (type $B (...))
;; CHECK: (type $A (...))
(type $A (...))
(type $B (...))
```

Rather than this, which looks nicer but would be incorrect:

```
;; CHECK: (type $A (...))
(type $A (...))
;; CHECK: (type $B (...))
(type $B (...))
```
* [Wasm GC] Fix precomputing of incompatible fallthrough values (#3875) | Alon Zakai | 2021-05-10 | 1 | -0/+110
Precompute not only computes values, but looks at the fallthrough,

(local.set 0
 (block
  ..stuff we can ignore..
  ;; the fallthrough we care about - if a value is set to local 0, it is this
  (i32.const 10)
 )
)

Normally that is fine, but the fuzzer found a case where it is not: RefCast may return a different type than the fallthrough, even an incompatible type if we try to do something bad like cast a function to a struct. As we may then propagate the value to a place that expects the proper type, this can cause an error.

To fix this, check if the precomputed value is a proper subtype. If it is not, then do not look through into the fallthrough, but compute the entire thing. (In the case of a bad RefCast of a func to a struct, it would then indicate a trap happens, and we would not precompute the value.)
* [Wasm GC] Fix precompute on GC data (#3810) | Alon Zakai | 2021-04-15 | 1 | -1/+358
This fixes precomputation on GC after #3803 was too optimistic. The issue is subtle. Precompute will repeatedly evaluate expressions and propagate their values, flowing them around, and it ignores side effects when doing so. For example:

(block
 ..side effect..
 (i32.const 1)
)

When we evaluate that we see there are side effects, but regardless of them we know the value flowing out is 1. So we can propagate that value, if it is assigned to a local and read elsewhere.

This is not valid for GC because struct.new and array.new have a "side effect" that is noticeable in the result. Each time we call struct.new we get a new struct with a new address, which ref.eq can distinguish. So when this pass evaluates the same thing multiple times it will get a different result. Also, we can't precompute a struct.get even if we know the struct, not unless we know the reference has not escaped (where a call could modify it). To avoid all that, do not precompute references, aside from the trivially safe ones like nulls and function references (simple constants that are the same each time we evaluate the expression emitting them).

precomputeExpression() had a minor bug which this fixes. It checked the type of the expression to see if we can create a constant for it, but really it should check the value - since (separate from this PR) we have no way to emit a "constant" for a struct etc. Also that only matters if replaceExpression is true, that is, if we are replacing with a constant; if we just want the value internally, we have no limit on that.

Also add Literal support for comparing GC refs, which is used by ref.eq. Without that tiny fix the tests here crash.

This adds a bunch of tests, many for corner cases that we don't handle (since the PR makes us not propagate GC references). But they should be helpful if/when we do, to avoid the mistakes in #3803.
* [Wasm GC] Full precompute support for GC (#3803) | Alon Zakai | 2021-04-13 | 1 | -0/+37
The precompute pass ignored all reference types, but that was overly pessimistic: we can precompute some of them; for example, a null or a reference to a function is fully precomputable. To allow that to work, add missing integration in getFallthrough as well. With this, we can precompute quite a lot of field accesses in the existing -Oz testcase, as can be seen from the output. That testcase runs --fuzz-exec so it prints out all those logged values, proving they have not changed.
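A small sketch of the trivially precomputable references, which evaluate to the same constant every time:

```
(func $get-null (result anyref)
 (ref.null none))
(func $get-func (result funcref)
 (ref.func $get-null))
```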