If all the fields of a struct.new are defaultable, see if replacing it with a
struct.new_default preserves the behavior, and reduce that way if so.
Also add a missing --closed-world to the --remove-unused-types
invocation. Without it, that invocation errored and did nothing, which I
noticed when testing this. The test also checks for that.
Update Builder and IRBuilder makeStructGet and makeStructSet functions
to require the memory order to be explicitly supplied. This is slightly
more verbose, but will reduce the chances that we forget to properly
consider synchronization when implementing new features in the future.
The UBSan builder started failing with an error about a misaligned store
in wasm-ctor-eval.cpp. The store was already done via `memcpy` to avoid
alignment issues, but apparently this is no longer enough. Use `void*`
as the destination type to further avoid giving the impression of
guaranteed alignment.
Also fix UB when executing std::abs on minimum negative integers in
literal.cpp.
It turns out that #7165 was not enough because we had a second (!?)
list of tests to ignore, and also a condition. Move all that to the shared
location as well, continuing that PR.
Also remove simd.wast from the list, as that issue has been fixed.
We always use the new WAT parser now.
The flake8 we were running on CI was too old and began giving spurious
errors about the uninterpreted contents of f-strings. Update to the
latest flake8 and fix all the new errors, including the previously
incorrect comment syntax in the .flake8 file.
Also remove scripts/storage.py, since it didn't seem to be used for
anything we currently need.
The coverage CI builder was failing because of a spurious error about a
variable being possibly used uninitialized. Fix the warning by
initializing the variable to 0.
GlobalStructInference optimizes gets of immutable fields of structs that
are only ever instantiated to initialize immutable globals. Due to all
the immutability, it's not possible for the optimized reads to
synchronize with any writes via the accessed memory, so we just need to
be careful to replace removed seqcst gets with seqcst fences.
As a drive-by, fix some stale comments in gsi.wast.
Conservatively avoid introducing synchronization bugs by not optimizing
atomic struct.gets at all in GUFA. It is possible that we could be more
precise in the future.
Also remove obsolete logic dealing with the types of null values as a
drive-by. All null values now have bottom types, so the type mismatch
this code checked for is impossible.
The list of tests, and which tests cannot be fuzzed, will be useful in
a future PR that uses it for ClusterFuzz as well. This just moves things
out so they are reusable.
During function reconstruction, a walker iterates through each instruction
of a function, incrementing a counter to find matching sequences. As a
result, the sequences of a function must be sorted by smallest start
index, otherwise reconstruction will miss outlining a repeat sequence.
I considered making a test for this commit, but the sort wasn't needed
until the tests started running on GitHub infra. I'm not sure what
specific architecture is causing the discrepancy in vector ordering, but
let's introduce the sort to be safe.
This handles the case where WASM_BIGINT is enabled by default by emcc.
Binaryen's JS bindings do not currently work with bigint (see #7163).
GTO removes fields that are never read and also removes sets to those
fields. Update the pass to add a seqcst fence when removing a seqcst set
to preserve its effect on the global order of seqcst operations.
Sequentially consistent gets that are optimized out need to have seqcst
fences inserted in their place to keep the same effect on global
ordering of sequentially consistent operations. In principle, acquire
gets could be similarly optimized with an acquire fence in their place,
but acquire fences synchronize more strongly than acquire gets, so this
may have a negative performance impact. For now, inhibit optimization of
acquire gets.
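A sketch of the replacement (the text syntax for atomic struct accesses follows the shared-everything-threads proposal and is illustrative; `$type`, the field index, and the known value are hypothetical):

```wat
;; Before: a seqcst get whose value the optimizer knows.
(drop
  (struct.atomic.get seqcst $type 0
    (local.get $ref)
  )
)
;; After: the value is inlined, but a fence keeps the access's place in the
;; global ordering of sequentially consistent operations.
(atomic.fence)
(drop
  (i32.const 42)
)
```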
1. Error on retrying due to a wasm-opt issue, locally (in production, we
don't want to error on ClusterFuzz).
2. Move some asserts from test_run_py to the helper generate_testcases
(so that the asserts happen in all callers).
Heap2Local replaces gets and sets of non-escaping heap allocations with
gets and sets of locals. Since the accessed data does not escape, it
cannot be used to directly synchronize with other threads, so this
optimization is generally safe even in the presence of shared structs
and atomic struct accesses. The only caveat is that sequentially
consistent accesses additionally participate in the global ordering of
sequentially consistent operations, and that effect on the global
ordering cannot be removed. Insert seqcst fences to maintain this global
synchronization when removing sequentially consistent gets and sets.
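A sketch of the rewrite for a one-field allocation (type, field, and local names are hypothetical):

```wat
;; Before: $ref never escapes the function.
(local.set $ref (struct.new_default $point))
(struct.set $point 0 (local.get $ref) (i32.const 1))
(drop (struct.get $point 0 (local.get $ref)))
;; After: the field lives in a local and no allocation remains. If the removed
;; accesses were seqcst, an atomic.fence would be inserted in their place.
(local.set $field0 (i32.const 0)) ;; the field's default value
(local.set $field0 (i32.const 1))
(drop (local.get $field0))
```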
Implement support for both sequentially consistent and acquire-release
variants of `struct.atomic.get` and `struct.atomic.set`, as proposed by
shared-everything-threads. Introduce a new `MemoryOrdering` enum for
describing different levels of atomicity (or the lack thereof). This new
enum should eventually be adopted by linear memory atomic accessors as
well to support acquire-release semantics, but for now just use it in
`StructGet` and `StructSet`.
In addition to implementing parsing and emitting for the instructions,
validate that shared-everything is enabled to use them, mark them as
having synchronization side effects, and lightly optimize them by
relaxing acquire-release accesses to non-shared structs to normal,
unordered accesses. This is valid because such accesses cannot possibly
synchronize with other threads. Also update Precompute to avoid
optimizing out synchronization points.
There are probably other passes that need to be updated to avoid
incorrectly optimizing synchronizing accesses, but identifying and
fixing them is left as future work.
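The new instructions look like this in the text format (the type and field index are hypothetical; syntax per the shared-everything-threads proposal):

```wat
;; Sequentially consistent get of field 0.
(struct.atomic.get seqcst $point 0
  (local.get $ref)
)
;; Acquire-release set of field 0.
(struct.atomic.set acqrel $point 0
  (local.get $ref)
  (i32.const 1)
)
```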
We normally like to move brs after ifs into the if, when in a loop:
(loop $loop
  (if
    ..
    (unreachable)
    (code)
  )
  (br $loop)
)
=>
(loop $loop
  (if
    ..
    (unreachable)
    (block
      (code)
      (br $loop) ;; moved in
    )
  )
)
However this may be invalid to do if the if condition is unreachable, as
then one arm may be concrete (`code` in the example could be an `i32`,
for example). As this is dead code anyhow, leave it for DCE.
(#7154)
With this option, each time we reduce we save a file w.wasm.17 or such,
incrementing that counter. This is useful when debugging the reducer, but
might have more uses.
* Add a new "sleep" fuzzer import that sleeps for a given number of ms.
* Add JSPI support in fuzz_shell.js. This is in the form of commented-out async/await
keywords - commented out so that normal fuzzing is not impacted. When we want
to fuzz JSPI, we uncomment them. We also apply the JSPI operations of marking
imports and exports as suspending/promising.
JSPI fuzzing is added to both fuzz_opt.py and ClusterFuzz's run.py.
Now that #7149 added support for parsing block parameters, we can run
additional spec tests that previously failed.
The PR was accidentally merged without these fixes included.
This makes Precompute about 5% faster on a WasmGC binary.
Inspired by #6931.
Since multivalue was standardized, WebAssembly has supported not only
multiple results but also an arbitrary number of inputs on control flow
structures, but until now Binaryen did not support control flow input.
Binaryen IR still has no way to represent control flow input, so lower
it away using scratch locals in IRBuilder. Since both the text and
binary parsers use IRBuilder, this gives us full support for parsing
control flow inputs.
The lowering scheme is mostly simple. A local.set writing the control
flow inputs to a scratch local is inserted immediately before the
control flow structure begins and a local.get retrieving those inputs is
inserted inside the control flow structure before the rest of its body.
The only complications come from ifs, in which the inputs must be
retrieved at the beginning of both arms, and from loops, where branches
to the beginning of the loop must be transformed so their values are
written to the scratch local along the way.
Resolves #6407.
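The scheme for a block can be sketched as follows (the scratch local name is hypothetical):

```wat
;; Before: a block that takes an i32 input.
(i32.const 1)
(block $l (param i32) (result i32)
  (i32.const 2)
  (i32.add)
)
;; After lowering: the input travels through a scratch local instead.
(i32.const 1)
(local.set $scratch)
(block $l (result i32)
  (local.get $scratch)
  (i32.const 2)
  (i32.add)
)
```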
Fixes #7145
Value types were previously represented internally as either enum values
for "basic," i.e. non-reference, non-tuple types or pointers to
`TypeInfo` structs encoding either references or tuples. Update the
representation of reference types to use one bit to encode nullability
and the rest of the bits to encode the referenced heap type. This allows
canonical reference types to be created with a single logical or rather
than by taking a lock on a global type store and doing a hash map lookup
to canonicalize.
This change is a massive performance improvement and dramatically
improves how performance scales with threads because the removed lock
was highly contended. Even with a single core, the performance of an O3
optimization pipeline on a WasmGC module improves by 6%. With 8 cores,
the improvement increases to 29% and with all 128 threads on my machine,
the improvement reaches 46%.
The full new encoding of types is as follows:
- If the type ID is within the range of the basic types, the type is
the corresponding basic type.
- Otherwise, if bit 0 is set, the type is a tuple and the rest of the
bits are a canonical pointer to the tuple.
- Otherwise, the type is a reference type. Bit 1 determines the
nullability and the rest of the bits encode the heap type.
Also update the encodings of basic heap types so they no longer use the
low two bits to avoid conflicts with the use of those bits in the
encoding of types.
The new Two fuzz testcase handler generates two wasm files and
then runs them in V8, linking them using JS. It then optimizes at least one of
the two and runs it again, and checks for differences.
This is similar to the Split fuzzer for wasm-split, but the two wasm files
are arbitrary and not the result of splitting.
Also lower CtorEval priority a little. It is not very important.
Co-locate the declaration and implementation of TypeGraphWalkerBase and
its subtypes in wasm-type.cpp and simplify the implementation. Remove
the preVisit and postVisit tasks for both Types and HeapTypes since
overriding scanType and scanHeapType is sufficient for all users. Stop
scanning the HeapTypes in reference types because a follow-on change
(#7142) will make that much more complicated, and it turns out that it
is not necessary.
(#7138)
If we see a StructGet with no content (the type it reads from has no writes)
then we can make it unreachable. The old code literally just changed the type
to unreachable, which would later work out with refinalization - but only if
the StructGet's ref was unreachable. But it is possible for this situation to
occur without that, and if so, this hit the validation error "can't have an
unreachable node without an unreachable child".
To fix this, merge all code paths that handle "impossible" situations, which
simplifies things, and add this situation.
This uncovered an existing bug where we noted default values of refs, but
not of non-refs (which could lead us to think that a field of a struct that
was only ever created by struct.new_default was never created at all). Fixed
as well.
Similar to call-export*, these imports call a wasm function from outside the
module. The difference is that we send a function reference for them to call
(rather than an export index).
This gives more coverage, first by sending a ref from wasm to JS, and also
since we will now try to call anything that is sent. Exports, in comparison,
are filtered by the fuzzer to things that JS can handle, so this may lead to
more traps, but maybe also some new situations. This also leads to adding
more logic to execution-results.h to model JS trapping properly.
fuzz_shell.js is refactored to allow sharing code between call-export* and
call-ref*.
LLVM recently split the bulk-memory-opt feature out from bulk-memory,
containing just memory.copy and memory.fill. This change follows that,
making bulk-memory-opt also enabled when all of bulk-memory is enabled.
It also introduces call-indirect-overlong following LLVM, but ignores
it, since Binaryen has always allowed the encoding (i.e. command
line flags enabling or disabling the feature are accepted but
ignored).
When we refactored how the name section is read, we accidentally left an
old warning about invalid field name indices in place. The old warning
code compares the type index from the names section to the size of the
parsed type vector to determine if the index is out-of-bounds. Now that
we parse the name section before the type section, this is no longer
correct. Delete the old warning; we already have a new, correct warning
for out-of-bound indices when we parse the type section.
This sends --closed-world to wasm-opt from the fuzzer when we use that
flag (before, we only used it on optimizations, not on fuzz generation).
TranslateToFuzzReader now stores a boolean for whether we are in
closed-world mode or not.
This has no effect so far; it is a refactoring for a later PR, where we must
generate code differently depending on whether we are in closed-world mode
or not.
This pass is now just part of Memory64Lowering.
Once this lands we can remove the `--table64-lowering` flag from
emscripten. Because I've used an alias here, there will be an interim
period where emscripten runs this pass twice, since it passes both
flags. However, this is only temporary, and the second run will be
a no-op since the first one removes the feature.
In open world we must assume that a funcref that escapes to the outside
might be called.
Move all state relevant to reading source maps out of WasmBinaryReader
and into a new utility, SourceMapReader. This is a prerequisite for
parallelizing the parsing of function bodies, since the source map
reader state is different at the beginning of each function.
Also take the opportunity to simplify the way we read source maps, for
example by deferring the reading of anything but the position of a debug
location until it will be used and by using `std::optional` instead of
singleton `std::set`s to store function prologue and epilogue debug
locations.
While parsing a binary file, there may be pops that need to be fixed up
even if EH is not (yet) enabled because the target features section has
not been parsed yet. Previously `EHUtils::handleBlockNestedPops` did not
do anything if EH was not enabled, so the binary parser would fail to
fix up pops in that case. Add an optional parameter to override this
behavior so the parser can fix up pops unconditionally.
Fixes #7127.
RemoveUnusedBrs sinks blocks into If arms when those arms contain
branches to the blocks and the other arm and condition do not. Now that
we type Ifs with unreachable conditions as unreachable, it is possible
for the If arms to have a different type than the block that would be
sunk, so sinking the block would produce invalid IR. Fix the problem by
never sinking blocks into Ifs with unreachable conditions.
Fixes #7128.
Even if the size is 0, if the offset is > 0 then we should trap.
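For example, per the bulk memory spec, with a memory of zero pages even a zero-length copy at offset 1 must trap, since the offset is past the end:

```wat
(module
  (memory 0)
  (func
    (memory.copy
      (i32.const 1) ;; destination offset > 0: out of bounds, even though...
      (i32.const 0) ;; source offset
      (i32.const 0) ;; ...the size is 0
    )
  )
)
```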
Rename the opcode values in wasm-binary.h to better match the names of
the corresponding instructions. This also makes these names match the
scheme used by the rest of the basic unary operations, allowing for more
macro use in the binary reader.
IRBuilder is a utility for turning arbitrary valid streams of Wasm
instructions into valid Binaryen IR. It is already used in the text
parser, so now use it in the binary parser as well. Since the IRBuilder
API for building each instruction requires only the information that the
binary and text formats include as immediates to that instruction, the
parser is now much simpler than before. In particular, it does not need
to manage a stack of instructions to figure out what the children of
each expression should be; IRBuilder handles this instead.
There are some differences between the IR constructed by IRBuilder and
the IR the binary parser constructed before this change. Most
importantly, IRBuilder generates better multivalue code because it
avoids eagerly breaking up multivalue results into individual components
that might need to be immediately reassembled into a tuple. It also
parses try-delegate more correctly, allowing the delegate to target
arbitrary labels, not just other `try`s. There are also a couple
superficial differences in the generated label and scratch local names.
As part of this change, add support for recording binary source
locations in IRBuilder.
Previously the only Ifs that were typed unreachable were those in which
both arms were unreachable and those in which the condition was
unreachable that would have otherwise been typed none. This caused
problems in IRBuilder because Ifs with unreachable conditions and
value-returning arms would have concrete types, effectively hiding the
unreachable condition from the logic for dropping concretely typed
expressions preceding an unreachable expression when finishing a scope.
Relax the conditions under which an If can be typed unreachable so that
all Ifs with unreachable conditions or two unreachable arms are typed
unreachable. Propagating unreachability more eagerly this way makes
various optimizations of Ifs more powerful. It also requires new
handling for unreachable Ifs with concretely typed arms in the Printer
to ensure that printed wat remains valid.
Also update Unsubtyping, Flatten, and CodeFolding to account for the
newly unreachable Ifs.
This feature is depended on by our ClusterFuzz integration.
CodeFolding previously only worked on blocks that did not produce
values. It worked on Ifs that produced values, but only by accident; the
logic for folding matching tails was not written to support tails
producing concrete values, but it happened to work for Ifs because
subsequent ReFinalize runs fixed all the incorrect types it produced.
Improve the power of the optimization by explicitly handling tails that
produce concrete values for both blocks and ifs. Now that the core logic
handles concrete values correctly, remove the unnecessary ReFinalize
run.
Also remove the separate optimization of Ifs with identical arms; this
optimization requires ReFinalize and is already performed by
OptimizeInstructions.
The two files are then linked and run by fuzz_shell.js (we had this functionality
already in order to fuzz wasm-split). By adding multiple build and run commands
of both the primary and secondary wasm files, we can end up with multiple
instances of two different wasm files that call between themselves.
To help testing, add a script that extracts the wasm files from the testcase. This
may also be useful in the future for testcase reduction.
The LUB of sibling types is their common supertype, but after the
sibling types are merged, their LUB is the merged type, which is a
strict subtype of the previous LUB. This means that merging sibling
types causes `selects` to have stale types when the two select arms
previously had the two merged sibling types. To fix any potential stale
types, ReFinalize after merging sibling types.
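A sketch of how the staleness arises (type names are hypothetical):

```wat
;; Before merging: the select's type is the LUB of its arms, (ref $super).
(select (result (ref $super))
  (local.get $a)    ;; type (ref $sibling1)
  (local.get $b)    ;; type (ref $sibling2)
  (local.get $cond)
)
;; After $sibling1 and $sibling2 are merged into $merged, both arms have type
;; (ref $merged), so the select's recorded type (ref $super) is stale;
;; ReFinalize refines it to (ref $merged).
```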
(#7116)
Lower away saturating fptoint operations when we know we are using
emscripten.
This was never right, for over a decade, and I suppose just never used... it
should have been called "take", since it grabbed data from the other item and
then set that other item to empty. Fix it so it swaps properly.