Test that we can still parse the old annotated form as well.
`array` is the supertype of all defined array types and for now is a subtype of
`data`. (Once `data` becomes `struct` this will no longer be true.) Update the
binary and text parsing of `array.len` to ignore the obsolete type annotation
and update the binary emitting to emit a zero in place of the old type
annotation and the text printing to print an arbitrary heap type for the
annotation. A follow-on PR will add support for the newer unannotated version of
`array.len`.
As the number of basic heap types has grown, the complexity of the subtype and
LUB calculations has grown as well. To ensure that they are correct, test the
complete matrix of basic types and trivial user-defined types. Fix the subtype
calculation to make string types subtypes of `any` to make the test pass.
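The full-matrix test described here can be sketched on a toy lattice. The type names follow the wasm GC hierarchy, but the table below is an illustrative model, not Binaryen's actual representation:

```python
# A toy model of a basic-heap-type lattice (hypothetical and much smaller than
# Binaryen's real one): each type maps to the set of all its supertypes,
# including itself. Note string is a direct subtype of any, as in the fix.
supertypes = {
    "any": {"any"},
    "eq": {"eq", "any"},
    "struct": {"struct", "eq", "any"},
    "array": {"array", "eq", "any"},
    "string": {"string", "any"},
}

def is_subtype(a, b):
    return b in supertypes[a]

def lub(a, b):
    # The least upper bound is the lowest common supertype, i.e. the common
    # supertype that itself has the most supertypes.
    return max(supertypes[a] & supertypes[b], key=lambda t: len(supertypes[t]))

# Exhaustively check the whole matrix, as the commit describes.
for a in supertypes:
    for b in supertypes:
        l = lub(a, b)
        assert is_subtype(a, l) and is_subtype(b, l)
        assert l == lub(b, a)
```

Testing every pair rather than hand-picked cases is what catches omissions like the missing string-subtype-of-any edge.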
If the only memories are imported, we don't need the memory section. We were
already doing that for tables, functions, etc.
Instead of Many, use a proper Cone Type for the data, as appropriate.
Since the type annotations are not stored explicitly in Binaryen IR, we have to
validate them in the parser. Implement this and fix a newly-caught incorrect
annotation in the tests.
In the upstream spec, `data` has been replaced with a type called `struct`. To
allow for a graceful update in Binaryen, start by introducing "struct" as an
alias for "data". Once users have stopped emitting `data` directly, future PRs
will remove `data` and update the subtyping so that arrays are no longer
subtypes of `struct`.
This computes how deep the children of a heap type are. This will be useful in
cone type optimizations, since we want to "normalize" cones: a cone of depth
infinity can just be a cone of the actual maximum depth of existing children, etc.,
and it's simpler to have a single canonical representation to avoid extra work.
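A sketch of this computation, assuming a hypothetical map from each heap type to its declared immediate subtypes (not Binaryen's actual data structures):

```python
# Hypothetical subtype declarations: each type maps to its immediate subtypes.
declared_subtypes = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": [],
    "D": [],
}

def max_depths(subtypes):
    # depth(t) = length of the deepest chain of subtypes below t.
    depths = {}
    def depth(t):
        if t not in depths:
            children = subtypes[t]
            depths[t] = 1 + max(depth(c) for c in children) if children else 0
        return depths[t]
    for t in subtypes:
        depth(t)
    return depths

depths = max_depths(declared_subtypes)
# A cone of "infinite" depth rooted at A can be normalized to a cone of
# depth 2, the actual maximum depth of A's existing subtypes.
assert depths == {"A": 2, "B": 1, "C": 0, "D": 0}
```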
This requires parsing local indices and fixing a bug in `Function::setLocalName`
where it only set up the mapping from index to name and not the mapping from
name to index.
We had cone tests for ref.eq and ref.cast etc. but not for struct.get.
Parse unary, binary, drop, and select instructions, properly fixing up stacky
code, unreachable code, and multivalue code so it can be represented in Binaryen
IR.
This will be useful in further cone type optimizations.
When the heap types are not subtypes of each other, but a null is possible, the
intersection exists and is a null. That null must be the shared bottom type.
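A sketch of that rule, with illustrative names rather than Binaryen's API: two nullable references with unrelated heap types intersect in exactly a null of the shared bottom type.

```python
def intersect(a, b, is_subtype, bottom_of):
    # a and b are (heap_type, nullable) pairs.
    (ha, na), (hb, nb) = a, b
    if is_subtype(ha, hb):
        lower = ha
    elif is_subtype(hb, ha):
        lower = hb
    else:
        # Unrelated heap types: only a null can inhabit both sides, and that
        # null must have the shared bottom type of the hierarchy.
        return (bottom_of(ha), True) if (na and nb) else None
    return (lower, na and nb)

supertypes = {"struct": {"struct", "any"}, "array": {"array", "any"}, "any": {"any"}}
is_sub = lambda a, b: b in supertypes[a]
bottom = lambda t: "none"  # the shared bottom of this (single) hierarchy

assert intersect(("struct", True), ("array", True), is_sub, bottom) == ("none", True)
assert intersect(("struct", False), ("array", True), is_sub, bottom) is None
```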
A cone type is a PossibleContents that has a base type and a depth, and it
contains all subtypes up to that depth. So depth 0 is an exact type from
before, etc.
This only adds cone type computations when combining types, that is, when we
combine two exact types we might get a cone, etc. This does not yet use the
cone info in all places (like struct gets and sets), and it does not yet define roots
of cone types, all of which is left for later. IOW this is the MVP of cone types that
is just enough to add them + pass tests + test the new functionality.
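One such combination can be sketched on a simple tree of types, with helper functions for the LUB and relative depth (illustrative, not Binaryen code):

```python
# A tiny subtype tree: B and C are immediate subtypes of A.
parent = {"A": None, "B": "A", "C": "A"}

def ancestors(t):
    out = []
    while t is not None:
        out.append(t)
        t = parent[t]
    return out

def lub(a, b):
    anc_b = ancestors(b)
    return next(t for t in ancestors(a) if t in anc_b)

def depth_below(ancestor, t):
    return ancestors(t).index(ancestor)

def combine_cones(a, b):
    # A cone is (base type, depth). The combined cone is rooted at the LUB,
    # deep enough to still cover both original cones.
    (base_a, da), (base_b, db) = a, b
    base = lub(base_a, base_b)
    return (base, max(da + depth_below(base, base_a),
                      db + depth_below(base, base_b)))

# Combining two exact types (depth 0) of sibling types yields a cone.
assert combine_cones(("B", 0), ("C", 0)) == ("A", 1)
```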
Remove an obsolete error about null characters and test both binary and text
round tripping of a string constant containing an escaped zero byte.
With the goal of supporting null characters (i.e. zero bytes) in strings.
Rewrite the underlying interned `IString` to store a `std::string_view` rather
than a `const char*`, reduce the number of map lookups necessary to intern a
string, and present a more immutable interface.
Most importantly, replace the `c_str()` method that returned a `const char*`
with a `toString()` method that returns a `std::string`. This new method can
correctly handle strings containing null characters. A `const char*` can still
be had by calling `data()` on the `std::string_view`, although this usage should
be discouraged.
This change is NFC in spirit, although not in practice. It does not intend to
support any particular new functionality, but it is probably now possible to use
strings containing null characters in at least some cases. At least one parser
bug is also incidentally fixed. Follow-on PRs will explicitly support and test
strings containing nulls for particular use cases.
The C API still uses `const char*` to represent strings. As strings containing
nulls become better supported by the rest of Binaryen, this will no longer be
sufficient. Updating the C and JS APIs to use pointer, length pairs is left as
future work.
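The practical difference can be illustrated in a sketch (Python standing in for the C++): an intern table keyed on full contents, as a `std::string_view` allows, versus what a `const char*`-based key would see.

```python
# Sketch: why C-string keys break interning of strings with embedded nulls.
interned = {}

def intern(s: bytes) -> bytes:
    # Key on the full byte contents (what a std::string_view key gives you).
    return interned.setdefault(s, s)

def c_str_key(s: bytes) -> bytes:
    # What keying on a const char* effectively sees: everything up to the
    # first NUL byte.
    return s.split(b"\x00", 1)[0]

a, b = b"foo\x00bar", b"foo\x00baz"
assert intern(a) is not intern(b)    # distinct strings stay distinct
assert c_str_key(a) == c_str_key(b)  # a char*-based key would conflate them
```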
Parse folded expressions as described in the spec:
https://webassembly.github.io/spec/core/text/instructions.html#folded-instructions.
The old binaryen parser _only_ parses folded expressions, and furthermore
requires them to be folded such that a parent instruction consumes the values
produced by its children and only those values. The standard format is much more
general and allows folded instructions to have an arbitrary number of children
independent of dataflow.
To prevent the rest of the parser from having to know or care about the
difference between folded and unfolded instructions, parse folded instructions
after their children have been parsed. This means that a sequence of
instructions is always parsed in the order they would appear in a binary no
matter how they are folded (or not folded).
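The parsing order can be sketched as a post-order walk over the folded form, so a sequence always comes out in binary order (the representation here is illustrative):

```python
def unfold(folded):
    # Post-order traversal: emit each instruction's folded children first,
    # then the instruction itself, matching the order in a binary.
    out = []
    def visit(instr):
        op, children = instr[0], instr[1:]
        for child in children:
            visit(child)
        out.append(op)
    visit(folded)
    return out

# (i32.add (i32.const 1) (i32.const 2)) comes out in binary order:
assert unfold(("i32.add", ("i32.const 1",), ("i32.const 2",))) == [
    "i32.const 1", "i32.const 2", "i32.add"]
```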
Previously we treated each local index as a location, and every local.set to
that index could be read by every local.get. With this we connect only
relevant sets to gets.
Practically speaking, this removes LocalLocation which is what was just
described, and instead there is ParamLocation for incoming parameter
values. And local.get/set use normal ExpressionLocations to connect a
set to a get.
I was worried this would be slow, since computing LocalGraph takes time,
but it actually more than makes up for itself on J2Wasm, and we end up
faster, I guess because we do less updating after local.sets.
This makes a noticeable change on the J2Wasm binary, and perhaps will
help with benchmarks.
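The idea can be sketched for a single straight-line block (the real LocalGraph handles full control flow; everything here is illustrative):

```python
# Connect each local.get only to the local.set that actually reaches it,
# instead of treating the local index as one merged location.
def reaching_sets(body):
    current = {}   # local index -> position of the set that currently reaches
    links = []     # (get position, set position or "param")
    for pos, (op, idx) in enumerate(body):
        if op == "local.set":
            current[idx] = pos
        elif op == "local.get":
            # A get with no prior set reads the incoming parameter value.
            links.append((pos, current.get(idx, "param")))
    return links

body = [("local.set", 0), ("local.get", 0),
        ("local.set", 0), ("local.get", 0), ("local.get", 1)]
assert reaching_sets(body) == [(1, 0), (3, 2), (4, "param")]
```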
These types, `none`, `nofunc`, and `noextern` are uninhabited, so references to
them can only possibly be null. To simplify the IR and increase type precision,
introduce new invariants that all `ref.null` instructions must be typed with one
of these new bottom types and that `Literals` have a bottom type iff they
represent null values. These new invariants require several additional changes.
First, it is now possible that the `ref` or `target` child of a `StructGet`,
`StructSet`, `ArrayGet`, `ArraySet`, or `CallRef` instruction has a bottom
reference type, so it is not possible to determine what heap type annotation to
emit in the binary or text formats. (The bottom types are not valid type
annotations since they do not have indices in the type section.)
To fix that problem, update the printer and binary emitter to emit unreachables
instead of the instruction with undetermined type annotation. This is a valid
transformation because the only possible value that could flow into those
instructions in that case is null, and all of those instructions trap on nulls.
That fix uncovered a latent bug in the binary parser in which new unreachables
within unreachable code were handled incorrectly. This bug was not previously
found by the fuzzer because we generally stop emitting code once we encounter an
instruction with type `unreachable`. Now, however, it is possible to emit an
`unreachable` for instructions that do not have type `unreachable` (but are
known to trap at runtime), so we will continue emitting code. See the new
test/lit/parse-double-unreachable.wast for details.
Update other miscellaneous code that creates `RefNull` expressions and null
`Literals` to maintain the new invariants as well.
Fixes emscripten-core/emscripten#17988
The previous code was making emscripten-specific assumptions about
imports basically all coming from the `env` module.
I can't find a way to make this backwards compatible so may do a
combined roll with the emscripten-side change:
https://github.com/emscripten-core/emscripten/pull/17806
These are only needed for the metadata extraction in emcc.
Annotations on array.get and array.set were not being counted and the code could
generally be simplified since `count` already ignores types that don't need to
be counted.
The emscripten side is a little tricky but I've got some tests passing.
Currently blocked on:
https://github.com/emscripten-core/emscripten/issues/17969
This is a pretty subtle point that was missed in #4811 - we need to first visit the
child, then compute the size, as the child may alter that size.
Found by the fuzzer.
These comparisons were showing up as warnings with the latest
emcc/clang. I only noticed because I ran `ninja` to build everything
whereas we only test the `binaryen` target under emscripten normally.
We ignored only unreachable conditions, but we must ignore the arms as well,
or else we could error.
Just an automatic test that A has an intersection with A + B for the contents
in our unit test.
Avoid manually doing bitshifts etc. - leave combining to the core hash
logic, which can do a better job.
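For illustration, the kind of "core hash logic" meant here is a combiner in the style of Boost's `hash_combine`, rather than ad-hoc shifts and xors at each call site (32-bit sketch):

```python
# Boost-style hash_combine: mixes a new value into an accumulated seed.
def hash_combine(seed: int, value: int) -> int:
    seed ^= (value + 0x9E3779B9 + (seed << 6) + (seed >> 2)) & 0xFFFFFFFF
    return seed & 0xFFFFFFFF

# Combine several parts into one hash; the combiner, not the caller,
# decides how bits are mixed.
h = 0
for part in (hash("a"), hash("b"), hash("c")):
    h = hash_combine(h, part & 0xFFFFFFFF)
```

Centralizing the mixing means every hash site benefits if the combiner is improved, and call sites can't get the bit-twiddling subtly wrong.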
These just add more test coverage of situations I've seen during some
recent debugging, that I realized we lack coverage for.
The first function here would have detected the bug fixed in #5089.
We compared types and not heap types, so a difference in nullability
confused us. But at that point in the code, we've ruled out nulls, so we
should focus on heap types only.
This is the case for dynamic linking, where the segment offsets are
derived from the `__memory_base` import.
This moves the logic to add connections from signatures to functions from the top level
into the RefFunc logic. That way we only add those connections to functions that
actually have a RefFunc, which avoids us thinking that a function without one can be
reached by a call_ref of its type.
Has a small but non-zero benefit on j2wasm.
If the PossibleContents for the two sides have no possible intersection then the
result must be 0.
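A sketch of the rule, with possible contents modeled as plain sets (the names are illustrative):

```python
# If the possible contents of ref.eq's two operands are provably disjoint,
# the two references can never be the same object, so the result is 0.
def optimize_ref_eq(lhs_possible, rhs_possible):
    if not (set(lhs_possible) & set(rhs_possible)):
        return ("i32.const", 0)
    return None  # a shared value (e.g. null) is possible; leave the ref.eq

assert optimize_ref_eq({"objA"}, {"objB"}) == ("i32.const", 0)
assert optimize_ref_eq({"objA", None}, {"objB", None}) is None
```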
Emit call_ref instructions with type annotations and a temporary opcode. Also
implement support for parsing optional type annotations on call_ref in the text
and binary formats. This is part of a multi-part graceful update to switch
Binaryen and all of its users over to using the type-annotated version of
call_ref without there being any breakage.
Fixes #5041
The GC spec has been updated to have heap type annotations on call_ref and
return_call_ref. To avoid breaking users, we will have a graceful, multi-step
upgrade to the annotated version of call_ref, but since return_call_ref has no
users yet, update it in a single step.
Previously when we parsed `string.const` payloads in the text format we were
using the text strings directly instead of un-escaping them. Fix that parsing,
and while we're editing the code, also add support for the `\r` escape allowed
by the spec. Remove a spurious nested anonymous namespace and spurious `static`s
in Print.cpp as well.
Somewhat similar to ref.cast, but simpler.
Also update some TODO text.
This lets that pass optimize 64-bit offsets on memory64 loads and stores.
floating points (#5034)
```
(-x) + y -> y - x
x + (-y) -> x - y
x - (-y) -> x + y
```
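These rewrites are exact for IEEE floats, because negation only flips the sign bit and subtraction is defined as addition of the negation. A quick property check:

```python
import random
import struct

def bits(f: float) -> bytes:
    # Compare exact IEEE bit patterns, so that e.g. -0.0 and +0.0 differ.
    return struct.pack("<d", f)

random.seed(0)
for _ in range(1000):
    x = random.uniform(-1e10, 1e10)
    y = random.uniform(-1e10, 1e10)
    assert bits((-x) + y) == bits(y - x)
    assert bits(x + (-y)) == bits(x - y)
    assert bits(x - (-y)) == bits(x + y)
```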
This finalizes the multi memories feature introduced in #4968.
Also fix some formatting issues in the file.
Recently we added logic to ignore effects that don't "escape" past the function call.
That is, e.g. local.set only affects the current function scope, and once the call stack
is unwound it no longer matters as an effect. This moves that logic to a shared place,
and uses it in the core Vacuum logic.
The new constructor in EffectAnalyzer receives a function and then scans it as
a whole. This works just like e.g. scanning a Block as a whole (if we see a break in
the block, that has an effect only inside it, and the Block + children doesn't have a
branch effect).
Various tests are updated so they don't optimize away trivially, by adding new
return values for them.
(#5038)
due to timeout (#5039)
I think this simplifies the logic behind what we consider to trap. Before, we had kind of
a hack in visitLoop; now there is clearer reasoning behind it: we consider as
trapping things that trap in all VMs all the time, or will eventually. So a single allocation
doesn't trap, but an unbounded amount can, and an infinite loop is considered to
trap as well (a timeout in a VM will be hit eventually, somehow).
This means we cannot optimize away a trivial infinite loop with no effects in it, like
`while (1) {}`.
But we can optimize it out in trapsNeverHappen mode. In any event, such a loop
is not a realistic situation; an infinite loop with some other effect in it, like a call to
an import, will not be optimized out, of course.
Also clarify some other things regarding traps and trapsNeverHappen following
recent discussions in https://github.com/emscripten-core/emscripten/issues/17732
Specifically, TNH will never be allowed to remove calls to imports.