| Commit message |
| |
This can help in rare cases in MVP wasm, say for the return value of a block. But for
wasm GC it is very important due to casts.
Similar logic was added for SimplifyLocals as part of #5194, though it should probably have
been a separate PR then; this PR does the equivalent for RedundantSetElimination on its own.
Full tests will appear in a later PR (it is not really possible to test the GC side yet - we need
the logic in that later PR that actually switches to a more refined local index when available).
|
| |
Adds C APIs to inspect compound struct, array and signature heap types:
- Obtain field types, field packed types and field mutabilities of struct types:
  - BinaryenStructTypeGetNumFields (to iterate)
  - BinaryenStructTypeGetFieldType
  - BinaryenStructTypeGetFieldPackedType
  - BinaryenStructTypeIsFieldMutable
- Obtain element type, element packed type and element mutability of array types:
  - BinaryenArrayTypeGetElementType
  - BinaryenArrayTypeGetElementPackedType
  - BinaryenArrayTypeIsElementMutable
- Obtain parameter and result types of signature types:
  - BinaryenSignatureTypeGetParams
  - BinaryenSignatureTypeGetResults
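As a rough usage sketch (not part of this change; the exact signatures are assumptions - the getters are assumed to take a `BinaryenHeapType` plus a field index where applicable), iterating the fields of a struct heap type could look like this:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include "binaryen-c.h"

// Hypothetical helper: print the fields of a struct heap type using the new
// getters. BinaryenHeapTypeIsStruct guards against non-struct types.
static void printStructFields(BinaryenHeapType type) {
  if (!BinaryenHeapTypeIsStruct(type)) {
    return;
  }
  uint32_t numFields = BinaryenStructTypeGetNumFields(type);
  for (uint32_t i = 0; i < numFields; i++) {
    BinaryenType fieldType = BinaryenStructTypeGetFieldType(type, i);
    bool isMutable = BinaryenStructTypeIsFieldMutable(type, i);
    printf("field %u: type %zu (%s)\n",
           i,
           (size_t)fieldType,
           isMutable ? "mutable" : "immutable");
  }
}
```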
|
| |
We just checked whether the new type we prefer (when switching a local to a more
refined one in #5194) is different from the old type. But that check at the end
must also verify that it is a subtype.
Diff without whitespace is smaller.
|
| |
This sorts globals by their usage (while respecting dependencies). If the module
has very many globals then using smaller LEBs for the most-used ones can matter.
If there are fewer than 128 globals then we cannot reduce size, and the pass
exits early (so this pass will not slow down MVP builds, which usually have just
one global, the stack pointer). But with wasm GC it is common to use globals for
vtables etc., and often there is a very large number of them.
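The 128 cutoff follows from the LEB128 encoding used for global indices: indices 0-127 fit in a single byte, so there is nothing to shrink unless some indices need two or more bytes. A minimal sketch (not part of the pass) of the per-index cost:

```c
#include <stdint.h>

// Bytes needed to encode `value` as an unsigned LEB128. Indices below 128 take
// one byte, 128..16383 take two bytes, and so on; reordering globals so the
// most-used ones get the smallest indices shrinks the larger encodings.
static int lebSize(uint64_t value) {
  int bytes = 0;
  do {
    value >>= 7;
    bytes++;
  } while (value != 0);
  return bytes;
}
```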
|
| |
possible (#5194)
(local.set $refined (cast (local.get $plain)))
..
.. (local.get $plain) .. ;; we can change this to read from $refined
By using the more refined type we may be able to eliminate casts later.
To do this, look at the fallthrough value (so we can look through a cast or a block
value - this is the reason for the small wasm2js improvements in tests), and also
extend the code that picks which local index to read to look at types (previously
we just ignored any pairs of locals with different types).
|
| |
E.g.
Atomic operation (atomics are disabled)
=>
Atomic operations require threads [--enable-threads]
|
| |
Adds a multi-memory lowering pass that creates a single combined memory from the memories added to the module. This pass assumes that each memory is configured the same way (type, shared).
This pass also:
- replaces existing memory.size instructions with a custom function that returns the size of each memory as if it existed independently
- replaces existing memory.grow instructions with a custom function, using global offsets to track the size in pages of each memory so data doesn't overlap in the single combined memory
- adjusts the offsets of active data segments
- adjusts the offsets of Loads/Stores
|
| |
Previously the pass only pushed past an if or a br_if. This does the same but into an
if arm. On Wasm GC for example this can perform allocation sinking:
function foo() {
  x = new A();
  if (..) {
    use(x);
  }
}
=>
function foo() {
  if (..) {
    x = new A(); // this moved
    use(x);
  }
}
The allocation won't happen if we never enter the if. This helps wasm MVP too,
and in fact some existing tests benefit.
|
| |
If a heap type only ever appears as the result of a read, we must include it in
the analysis in ModuleUtils, even though it isn't written in the binary format.
Otherwise, analyses using ModuleUtils can error out when they do not find all types
in the list of types.
Fixes #5180
|
| |
The fallthrough there is trickier because the value is evaluated before the condition.
Unlike other fallthroughs, the value is not last, so we need to check if the condition
(which is after it) interferes with it.
|
| |
This makes the logic symmetric and easier to read.
Measuring speed, it seems identical to before, so performance is not a concern.
|
| |
Adds heap type utilities to the C API:
- BinaryenHeapTypeIsBasic
- BinaryenHeapTypeIsSignature
- BinaryenHeapTypeIsStruct
- BinaryenHeapTypeIsArray
- BinaryenHeapTypeIsSubType
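A small usage sketch (the exact signatures are assumptions - each predicate is assumed to take a single `BinaryenHeapType`, and `BinaryenHeapTypeIsSubType` a left/right pair):

```c
#include "binaryen-c.h"

// Hypothetical helper: classify a heap type using the new predicates.
static const char* describeHeapType(BinaryenHeapType type) {
  if (BinaryenHeapTypeIsSignature(type)) {
    return "signature";
  }
  if (BinaryenHeapTypeIsStruct(type)) {
    return "struct";
  }
  if (BinaryenHeapTypeIsArray(type)) {
    return "array";
  }
  return BinaryenHeapTypeIsBasic(type) ? "basic" : "other";
}
```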
|
| |
This is safe since we "partially remove" it: we don't move it to a place it might
execute more, but make it possibly execute less. See the new comment for
more details.
Motivated by wasm GC, but this can help wasm MVP as well. In both cases
loads from memory can trap, which limits what the VM can do to optimize
them past conditions, but with trapsNeverHappens we can do that at the
toolchain level:
x = read();
if () { .. }
use(x);
=>
if () { .. }
x = read(); // moved to here, and might not execute if the if did a break/return
use(x);
|
| |
Unlike in the legacy parser, we cannot depend on the folded text format to
determine how many values to return, so we determine that solely based on the
current function context.
To handle multivalue return correctly, fix a bug in which we could synthesize
new `unreachable`s and place them before existing unreachable instructions (like
returns) at the end of instruction sequences.
|
| |
The fuzzer started to fail on the recent externalize/internalize test
that was added in #5175 as we lack interpreter support. Move that to a separate
file and ignore it in the fuzzer for now.
|
| |
Add parsing functions for `memarg`s, the offset and align fields of load and
store instructions. These fields are interesting because they are lexically
reserved words that need to be further parsed to extract their actual values. On
top of that, add support for parsing all of the load and store instructions.
This required fixing a buffer overflow problem in the generated parser code and
adding more information to the signatures of the SIMD load and store
instructions. `SIMDLoadStoreLane` instructions are particularly interesting
because they may require backtracking to parse correctly.
|
| |
These are encoded as RefAs operations, and we have optimizations that assume such
operations trap on null, but Externalize/Internalize do not. Skip them there to avoid a
later error about an incorrect type.
|
| |
Also add the ability to parse memory indexes to correctly handle the
multi-memory versions of these instructions. Add and use a conversion from
`Result` to `MaybeResult` as well.
|
| |
Parse 32-bit and 64-bit memories, including their initial and max sizes. Shared
memories are left to a follow-up PR. The memory abbreviation that includes
inline data is parsed, but the associated data segment is not yet created. Also
do some minor simplifications in neighboring helper functions for other kinds of
module elements.
|
| |
When we read from a struct/array using a cone type, read from the types in the cone
and nothing else. Previously we used the declared type in the wasm, which might be
larger (both in the base type and in the depth). Likewise for writes.
To do this, this PR extends ConeReadLocation with a depth (previously the depth there
was assumed to be infinite; now it can be limited).
After this we are fully utilizing cone types in GUFA, as the test changes show (or at
least I can't think of any other uses of cones).
|
| |
The C API still returned non-nullable types for `dataref` (`ref data` instead of `ref null data`) and `i31ref` (`ref i31` instead of `ref null i31`). This PR aligns them with the current state of the GC proposal, making them nullable when obtained via the C API.
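A quick check of the new behavior might look like the following (a sketch; it assumes the existing `BinaryenTypeDataref`/`BinaryenTypeI31ref` getters and a `BinaryenTypeIsNullable` predicate):

```c
#include <assert.h>
#include "binaryen-c.h"

// Hypothetical check: both basic types should now report as nullable.
static void checkNullability(void) {
  assert(BinaryenTypeIsNullable(BinaryenTypeDataref())); // ref null data
  assert(BinaryenTypeIsNullable(BinaryenTypeI31ref()));  // ref null i31
}
```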
|
| |
Including all `SIMDExtract`, `SIMDReplace`, `SIMDShuffle` expressions.
|
| |
Also add some missing error checking for the similar local instructions and make
some neighboring styling more consistent.
|
| |
Now that we have a cone type, we are able to represent in PossibleContents the
natural content of a wasm location: a type or any of its subtypes. This allows us to
enforce the wasm typing rules, that is, to filter the data arriving at a location by the
wasm type of the location.
Technically this could be unnecessary if we had full implementations of flowFoo
and so forth, that is, tailored code for each wasm expression that makes sure we
only contain and flow content that fits in the wasm type. At the moment we don't have
that, and until the wasm spec stabilizes it's probably not worth the effort. Instead,
simply filter based on the type, which gives the same result (though it does take
a little more work; I measured it at 3% or so of runtime).
While doing so normalize cones to their actual maximum depth, which simplifies
things and will help more later as well.
|
| |
Adds `BinaryenHeapTypeNone`, `BinaryenHeapTypeNoext` and `BinaryenHeapTypeNofunc` to obtain the bottom heap types. Also adds `BinaryenHeapTypeIsBottom` to test whether a given heap type is a bottom type, and `BinaryenHeapTypeGetBottom` to obtain the respective bottom type given a heap type.
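For illustration (a sketch; the functions are assumed to take and return `BinaryenHeapType` values):

```c
#include <assert.h>
#include "binaryen-c.h"

// Hypothetical check: the bottom type of any heap type is itself a bottom
// type, and the basic bottom types are available directly.
static void demoBottomTypes(BinaryenHeapType type) {
  BinaryenHeapType bottom = BinaryenHeapTypeGetBottom(type);
  assert(BinaryenHeapTypeIsBottom(bottom));
  assert(BinaryenHeapTypeIsBottom(BinaryenHeapTypeNone()));
  assert(BinaryenHeapTypeIsBottom(BinaryenHeapTypeNoext()));
  assert(BinaryenHeapTypeIsBottom(BinaryenHeapTypeNofunc()));
}
```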
|
| |
Test that we can still parse the old annotated form as well.
|
| |
`array` is the supertype of all defined array types and for now is a subtype of
`data`. (Once `data` becomes `struct` this will no longer be true.) Update the
binary and text parsing of `array.len` to ignore the obsolete type annotation,
update the binary emitting to emit a zero in place of the old annotation, and
update the text printing to print an arbitrary heap type for the annotation. A
follow-on PR will add support for the newer unannotated version of `array.len`.
|
| |
As the number of basic heap types has grown, the complexity of the subtype and
LUB calculations has grown as well. To ensure that they are correct, test the
complete matrix of basic types and trivial user-defined types. Fix the subtype
calculation to make string types subtypes of `any` to make the test pass.
|
| |
If the only memories are imported, we don't need the section. We were already
doing that for tables, functions, etc.
|
| |
Instead of Many, use a proper Cone Type for the data, as appropriate.
|
| |
Since the type annotations are not stored explicitly in Binaryen IR, we have to
validate them in the parser. Implement this and fix a newly-caught incorrect
annotation in the tests.
|
| |
In the upstream spec, `data` has been replaced with a type called `struct`. To
allow for a graceful update in Binaryen, start by introducing "struct" as an
alias for "data". Once users have stopped emitting `data` directly, future PRs
will remove `data` and update the subtyping so that arrays are no longer
subtypes of `struct`.
|
| |
This computes how deep the children of a heap type are. This will be useful in
cone type optimizations, since we want to "normalize" cones: a cone of depth
infinity can just be a cone of the actual maximum depth of existing children, etc.,
and it's simpler to have a single canonical representation to avoid extra work.
|
| |
This requires parsing local indices and fixing a bug in `Function::setLocalName`
where it only set up the mapping from index to name and not the mapping from
name to index.
|
| |
We had cone tests for ref.eq and ref.cast etc. but not for struct.get.
|
| |
Parse unary, binary, drop, and select instructions, properly fixing up stacky
code, unreachable code, and multivalue code so it can be represented in Binaryen
IR.
|
| |
This will be useful in further cone type optimizations.
|
| |
When the heap types are not subtypes of each other, but a null is possible, the
intersection exists and is a null. That null must be the shared bottom type.
|
| |
A cone type is a PossibleContents that has a base type and a depth, and it
contains all subtypes up to that depth. So a cone of depth 0 is the exact type from
before, etc.
This only adds cone type computations when combining types, that is, when we
combine two exact types we might get a cone, etc. This does not yet use the
cone info in all places (like struct gets and sets), and it does not yet define roots
of cone types, all of which is left for later. In other words, this is the MVP of cone
types: just enough to add them, pass the tests, and test the new functionality.
|
| |
Remove an obsolete error about null characters and test both binary and text
round tripping of a string constant containing an escaped zero byte.
|
| |
With the goal of supporting null characters (i.e. zero bytes) in strings.
Rewrite the underlying interned `IString` to store a `std::string_view` rather
than a `const char*`, reduce the number of map lookups necessary to intern a
string, and present a more immutable interface.
Most importantly, replace the `c_str()` method that returned a `const char*`
with a `toString()` method that returns a `std::string`. This new method can
correctly handle strings containing null characters. A `const char*` can still
be had by calling `data()` on the `std::string_view`, although this usage should
be discouraged.
This change is NFC in spirit, although not in practice. It does not intend to
support any particular new functionality, but it is probably now possible to use
strings containing null characters in at least some cases. At least one parser
bug is also incidentally fixed. Follow-on PRs will explicitly support and test
strings containing nulls for particular use cases.
The C API still uses `const char*` to represent strings. As strings containing
nulls become better supported by the rest of Binaryen, this will no longer be
sufficient. Updating the C and JS APIs to use pointer, length pairs is left as
future work.
|
| |
Parse folded expressions as described in the spec:
https://webassembly.github.io/spec/core/text/instructions.html#folded-instructions.
The old binaryen parser _only_ parses folded expressions, and furthermore
requires them to be folded such that a parent instruction consumes the values
produced by its children and only those values. The standard format is much more
general and allows folded instructions to have an arbitrary number of children
independent of dataflow.
To prevent the rest of the parser from having to know or care about the
difference between folded and unfolded instructions, parse folded instructions
after their children have been parsed. This means that a sequence of
instructions is always parsed in the order they would appear in a binary no
matter how they are folded (or not folded).
|
| |
Previously we treated each local index as a location, and every local.set to
that index could be read by every local.get. With this we connect only
relevant sets to gets.
Practically speaking, this removes LocalLocation which is what was just
described, and instead there is ParamLocation for incoming parameter
values. And local.get/set use normal ExpressionLocations to connect a
set to a get.
I was worried this would be slow, since computing LocalGraph takes time,
but it actually more than makes up for itself on J2Wasm, and we end up faster,
I guess because we do less updating after local.sets.
This makes a noticeable change on the J2Wasm binary, and perhaps will
help with benchmarks.
|