| Commit message | Author | Age | Files | Lines |
on it (#6357)
Before FUZZ_OPTS was used both when doing --translate-to-fuzz/-ttf to generate the
wasm from the random bytes and also when later running optimizations to generate
a second wasm file for comparison. That is, we ended up doing this, if the opts were -O3:
wasm-opt random.input -ttf -o a.wasm -O3
wasm-opt a.wasm -O3 -o b.wasm
Now we have a pair a.wasm, b.wasm which we can test. However, we have run -O3
on both, which is a little silly - the second -O3 might not actually have anything left
to do, which would mean we compare the same wasm to itself.
Worse, this is incorrect, as there are things we need to do only during the
generation phase, like --denan. We need that in order to generate a valid wasm to
test on, but it is "destructive" in itself: when removing NaNs (to avoid nondeterminism)
it replaces them with 0, which is different. As a result, running --denan when
generating the second wasm from the first could lead to different execution between the two.
This was always a problem, but became more noticeable recently now that DeNaN
modifies SIMD operations, as one optimization we do is to replace a memory.copy
with v128.load + v128.store, and --denan will make sure the loaded value has no
NaNs...
To fix this, separate the generation and optimization phase. Instead of
wasm-opt random.input -ttf -o a.wasm --denan -O3
wasm-opt a.wasm --denan -O3 -o b.wasm
(note how --denan -O3 appears twice), do this:
wasm-opt random.input -ttf -o a.wasm --denan
wasm-opt a.wasm -O3 -o b.wasm
(note how --denan appears in generation, and -O3 in optimization).
|
See #5665 #5599: this is an existing issue and we have a workaround for it
using --dce, but it does not always work. I seem to be seeing it at a higher
frequency since landing recent fuzzer improvements, so ignore it.
There is some risk of us missing real bugs here (cases that we consider valid but V8
does not), but this is a validation error, which is not as serious as a difference
in behavior. And this is a long-standing issue that hasn't bitten us yet.
|
This PR is part of a series that adds basic support for the [typed
continuations/wasmfx proposal](https://github.com/wasmfx/specfx).
This particular PR adds support for the `cont.new` instruction for creating
continuations, documented [here](https://github.com/wasmfx/specfx/blob/main/proposals/continuations/Overview.md#instructions).
In short, these instructions are of the form `(cont.new $ct)` where `$ct` must
be a continuation type. The instruction takes a single (nullable) function
reference as its argument, which means that the folded representation of the
instruction is of the form `(cont.new $ct (foo ...))`.
Support for the instruction is implemented in both the old and the new wat
parser.
Note that this PR does not implement validation of the new instruction.
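For illustration, a minimal sketch in the text format (the type and function names here are invented, not taken from the PR):
(type $ft (func (param i32) (result i32)))
(type $ct (cont $ft))
(func $make-cont (param $f (ref null $ft)) (result (ref $ct))
  (cont.new $ct (local.get $f))
)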
|
We used to fuzz the MVP 1/3 of the time, all features 1/3, and a mixture 1/3, but that gives far too much
priority to the MVP, which is increasingly less important. It is also a good idea to
give "all" more priority as that enables more initial content to run (the fuzzer will
discard initial content if it doesn't validate with the features chosen in the current
iteration).
Also (NFC) rename POSSIBLE_FEATURE_OPTS to make the code easier to follow.
|
One problem was that spec testcases had exports with names that are not
valid to write as JS exports.name. For example an export with a - in the
name would end up as exports.foo-bar etc. Since #6310 that is fixed as
we do not emit such JS (we use the generic fuzz_shell.js script which iterates
over the keys in exports with exports[name]).
Also fix a few trivial fuzzer issues that initial content uncovered:
- Ignore a wat file with invalid utf-8.
- Print string literals in the same way from JS as from C++.
- Enable the stringref flag in V8.
- Remove tag imports (as we already do for global, function, and other imports).
|
JS engines print i31ref as just a number, so we need a small regex to
standardize the representation (similar to what we do for funcrefs in
the code above).
On the C++ side, make it actually print the i31ref rather than treat it
like a generic reference (for which we only print "object"). To do that
we must unwrap an externalized i31 as necessary, and add a case for
i31 in the printing logic.
Also move that printing logic to its own function, as it was starting to
get quite long.
|
We already have passes to legalize i64 imports and exports, which the fuzzer will
run so that we can run wasm files in JS VMs. SIMD and multivalue also pose a
problem as they trap on the boundary. In principle we could legalize them as well,
but that is substantial effort, so instead just prune them: given a wasm module,
remove any imports or exports that use SIMD or multivalue (or anything else that
is not legal for JS).
Running this in the fuzzer will let us avoid skipping v8 on testcases that
enable SIMD and multivalue.
(Multivalue is allowed in newer VMs, so that part of this PR could be removed
eventually.)
Also remove the limitation on running v8 with multimemory (v8 now supports
that).
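As a hypothetical sketch of what gets pruned (export names invented here): an export using only JS-legal types is kept, while one using SIMD is removed:
(func (export "get_i32") (result i32) ;; legal for JS, kept
  (i32.const 1)
)
(func (export "get_v128") (result v128) ;; v128 is not legal on the JS boundary, pruned
  (v128.const i32x4 0 0 0 0)
)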
|
We had two JS files that could run a wasm file for fuzzing purposes:
* --emit-js-shell, which emitted a custom JS file that runs the wasm.
* scripts/fuzz_shell.js, which was a generic file that did the same.
Both of those load the wasm and then call the exports in order and print out
logging as it goes of their return values (if any), exceptions, etc. Then the
fuzzer compares that output to running the same wasm in another VM, etc. The
difference is that one was custom for the wasm file, and one was generic. Aside
from that they are similar and duplicated a bunch of code.
This PR improves things by removing the first and using the second in all places, that is, we
now use the generic file everywhere.
I believe we added the custom emitter because we thought a generic file couldn't do all the
things we need, like know the order of exports and the types of return values,
but in practice there are ways to do those things: The exports are in fact
in the proper order (JS order of iteration is deterministic, thankfully), and
for the type we don't want to print type internals anyhow since that would
limit fuzzing --closed-world. We do need to be careful with types in JS (see
notes in the PR about the type of null) but it's not too bad. As for the types
of params, it's fine to pass in null for them all anyhow (null converts to a
number or a reference without error).
|
Fuzzing Asyncify has a significant cost both in terms of the complexity in
the fuzzer and the slowness of the fuzzing. In practice it was useful years ago
when Asyncify was written but hasn't found anything for a while, and Asyncify
is really deprecated given JSPI. For all those reasons, remove it from the fuzzer.
We do still have lots of normal coverage of asyncify in lit tests, unit tests, and
the Emscripten test suite.
Removing this will also make future improvements to the fuzzer simpler.
|
Users can put files in ./fuzz and they will be fuzzed with high priority.
Docs in source and https://github.com/WebAssembly/binaryen/wiki/Fuzzing#helper-scripts
|
This adds support to `CFGWalker` for the new EH instructions (`try_table`
and `throw_ref`). `CFGWalker` is used by many different passes, but in
the same vein as #3494, this adds tests for the `RedundantSetElimination`
pass. The `rse-eh.wast` file is created from a translated and simplified
version of `rse-eh-old.wast`, but many tests were removed because we
no longer have the special `catch` block or `delegate`.
|
This translates the old Phase 3 EH instructions, which include `try`,
`catch`, `catch_all`, `delegate`, and `rethrow`, into the new EH
instructions, which include `try_table` (with `catch` / `catch_ref` /
`catch_all` / `catch_all_ref`) and `throw_ref`, passed at the Oct 2023
CG meeting.
This translator can be used as a standalone tool by users of the
previous EH toolchain to generate binaries for the new spec without
recompiling, and also can be used at the end of the Binaryen pipeline to
produce binaries for the new spec while the end-to-end toolchain
implementation for the new spec is in progress.
While the goal of this pass is not optimization, it tries to do a little
better than the most naive implementation, namely by omitting a few
instructions where possible and trying to minimize the number of
additional locals. That matters because this can be used as a standalone
translator or as the last stage of the pipeline, at a point where we cannot
post-optimize the results, since the whole pipeline (-On) is not yet ready
for the new EH.
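Roughly, the shape of the translation for a simple catch_all case looks like this sketch (the functions here are invented, and the pass's actual output may differ in details such as locals and values):
(try
  (do (call $foo))
  (catch_all (call $bar))
)
=>
(block $outer
  (block $handler
    (try_table (catch_all $handler)
      (call $foo)
    )
    (br $outer) ;; no exception: skip the handler
  )
  (call $bar) ;; the old catch_all body
)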
|
This PR is part of a series that adds basic support for the [typed continuations proposal](https://github.com/wasmfx/specfx).
This particular PR adds support for the `resume` instruction. The most notable missing feature is validation, which is not implemented, yet.
|
This moves tests for the old EH spec to `exception-handling-old.wast`
and moves the new `exnref` test into `exception-handling.wast`, onto
which I plan to add more tests for the new EH spec.
The primary reason for splitting the files is I plan to exclude the new
EH test from the fuzzing while the new spec's implementation is in
progress, and I don't want to exclude the old EH tests altogether.
|
Once support for tuple.extract lands in the new WAT parser, this arity immediate
will let the parser determine how many values it should pop off the stack to
serve as the tuple operand to `tuple.extract`. This will usually coincide with
the arity of a tuple-producing instruction on top of the stack, but in the
spirit of treating the input as a proper stack machine, it will not have to and
the parser will still work correctly.
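For example, in the intended folded form the first immediate is the tuple arity and the second is the index (a sketch, assuming the arity-immediate syntax described here):
(tuple.extract 2 1
  (tuple.make 2
    (i32.const 10)
    (i64.const 20)
  )
)
Here the arity 2 tells the parser to pop a two-element tuple, and the index 1 selects the i64.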
|
Previously, the number of tuple elements was inferred from the number of
s-expression children of the `tuple.make` expression, but that scheme would not
work in the new wat parser, where s-expressions are optional and cannot be
semantically meaningful.
Update the text format to take the number of tuple elements (i.e. the tuple
arity) as an immediate. This new format will be able to be implemented in the
new parser as follow-on work.
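For illustration, a sketch of the format change:
(tuple.make (i32.const 1) (i64.const 2))    ;; before: arity inferred from the children
(tuple.make 2 (i32.const 1) (i64.const 2))  ;; after: the arity 2 is an explicit immediate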
|
This PR is part of a series that adds basic support for the [typed continuations proposal](https://github.com/wasmfx/specfx).
This PR adds continuation types, of the form `(cont $foo)` for some function type `$foo`.
The only notable changes affecting existing code are the following:
- This is the first `HeapType` which has another `HeapType` (rather than, say, a `Type`) as its immediate child. This required fixes to certain traversals that have a flag for being at the toplevel of a type.
- Some shared logic for parsing `HeapType`s has been factored out.
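A minimal sketch of the new form in the text format (names invented here):
(type $ft (func (param i32) (result i32)))
(type $ct (cont $ft)) ;; a continuation type whose immediate child is the heap type $ft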
|
The number of ignored functions is logged, so this can help us avoid getting
into a situation where many testcases just trap most of the time rather than
doing anything useful.
50% seems a reasonable cutoff. Even if 50% of functions trap, at least we are
getting 50% that don't, so we still get lots of useful work.
|
Add a new pass that analyzes the module to find the minimal subtyping relation
that is necessary to maintain the validity and semantics of the program and
rewrites the types to use this minimal relation. Besides eliminating references
to otherwise-unused intermediate types, this optimization should unlock
significant additional optimizing power in other type optimizations that are
constrained by having to maintain supertype validity, since after this new
optimization there are fewer and more general supertypes.
The analysis works by visiting each expression and module element to collect the
subtypings that are required to maintain its validity, then, using that as a
starting point, iteratively adding new subtypings required by type definitions
and casts until reaching a fixed point.
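As a hypothetical sketch of the effect (type names invented here): if $B declares $A as its supertype but nothing in the module actually requires that relation, the declared subtyping can be dropped:
(type $A (sub (struct (field i32))))
(type $B (sub $A (struct (field i32))))
;; can become, when no cast, call, assignment, etc. requires $B <: $A:
(type $A (sub (struct (field i32))))
(type $B (sub (struct (field i32))))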
|
TypeFinalization finalizes all the types that we can, that is, all private types that have no
children (subtypes). TypeUnFinalization unfinalizes (opens) all private types.
These could be used by first opening all types, optimizing, and then finalizing, as that
might find more opportunities.
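For illustration, a sketch of what finalizing looks like in the text format (assuming the standard shorthand where a type written without `sub` is final):
(type $A (sub (struct (field i32)))) ;; open (non-final): subtypes may be declared
(type $A (struct (field i32)))       ;; finalized: no subtypes allowed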
Fixes #5933
|
This setting is useful enough that there is basically no reason not to use it.
Turn it on by default to save some typing when running the fuzzer.
|
In some cases tuples are obviously not needed, such as when they are only used
in local operations and make/extract. Such tuples are not used as return values or
in control flow structures, so we might as well lower them to individual locals per
lane, which other passes can optimize a lot better.
I believe LLVM does the same with its own tuples: it lowers them as much as
possible, leaving only necessary ones.
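As a rough sketch (local names invented, declarations omitted, and written with the arity-immediate text format described elsewhere in this log), a tuple used only via locals and make/extract can become one local per element:
(local.set $t (tuple.make 2 (i32.const 1) (i64.const 2)))
(drop (tuple.extract 2 0 (local.get $t)))
;; can be lowered to, roughly:
(local.set $t.0 (i32.const 1))
(local.set $t.1 (i64.const 2))
(drop (local.get $t.0))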
Fixes #5923
|
Remove the prompt for user confirmation when using the --auto-initial-contents
option with the fuzzer. It is not actionable, and it prevents me from going off
and doing something else when I build and start the fuzzer in the same command.
|
Renaming the multimemory flag in Binaryen to match its naming in LLVM.
|
GUFA refines existing casts, but does not add new casts for fear of increasing code size
and adding more cast operations at runtime. This PR adds a version that does add all
those casts, and it looks like code size improves rather than regresses, at least
on J2Wasm and Kotlin. That is, this pass adds a lot more casts, but subsequent
optimizations benefit enough to shrink overall code size.
However, this may still not be worthwhile, as even if code size decreases we may end
up doing more casts at runtime, and those casts might be hard to remove, e.g.:
(call $foo
  (x) ;; inferred to be non-null
)
(func $foo (param (ref null $A)) ...
=>
(call $foo
  (ref.cast $A (x)) ;; add a cast here
)
(func $foo (param (ref $A)) ... ;; later pass refines here
That new cast cannot be removed after we refine the function parameter. If the
function never benefits from the fact that the input is non-null, then the cast is
wasted work (e.g. if the function only compares the input to another value).
To use this new pass, try --gufa-cast-all rather than --gufa. As with normal GUFA,
running the full optimizer afterwards is important, and even more important in
order to get rid of as many of the new casts as possible.
|
Just look for export names as a quoted string, that is, " and " with anything in the middle.
The old regex missed spaces, parens, and probably more. Spaces and parens
are used in the test suite, which is how the fuzzer noticed this.
|
No nop instruction is necessary in wasm, so in StackIR we can simply
remove them all.
Fixes #5745
|
We already ignore OOMs in the interpreter. This adds the equivalent for V8, where
I just saw such an error (on an array.new of a massive size).
|
We used to have a wasm-merge tool but removed it for a lack of use cases. Recently
use cases have been showing up in the wasm GC space and elsewhere, as people are
using more diverse toolchains together, for example a project might build some C++
code alongside some wasm GC code. Merging those wasm files together can allow
for nice optimizations like inlining and better DCE etc., so it makes sense to have a
tool for merging.
Background:
* Removal: #1969
* Requests:
* wasm-merge - why it has been deleted #2174
* Compiling and linking wat files #2276
* wasm-link? #2767
This PR is a complete rewrite of wasm-merge, not a restoration of the original
codebase. The original code was quite messy (my fault), and also, since then
we've added multi-memory and multi-table which makes things a lot simpler.
The linking semantics are as described in the "wasm-link" issue #2767 : all we do
is merge normal wasm files together and connect imports and exports. That is, we
have a graph of modules and their names, and each import to a module name can
be resolved to that module. Basically, like a JS bundler would do for JS, or, in other
words, we do the same operations as JS code would do to glue wasm modules
together at runtime, but at compile time. See the README update in this PR for a
concrete example.
There are no plans to do more than that simple bundling, so this should not
really overlap with wasm-ld's use cases.
This should be fairly fast as it works in linear time on the total input code. However,
it won't be as fast as wasm-ld, of course, as it does build Binaryen IR for each
module. An advantage to working on Binaryen IR is that we can easily do some
global DCE after merging, and further optimizations are possible later.
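As a hypothetical sketch of that bundling (module contents and the names "first" and "second" are invented here): if "second" imports a function that "first" exports, the merged output connects them directly:
;; input module named "first"
(module
  (func (export "foo") (result i32)
    (i32.const 42)
  )
)
;; input module named "second"
(module
  (func $foo (import "first" "foo") (result i32))
  (func (export "main") (result i32)
    (call $foo)
  )
)
;; merged, roughly: a single module where the import is resolved
;; to the definition that came from "first"
(module
  (func $foo (result i32)
    (i32.const 42)
  )
  (func (export "main") (result i32)
    (call $foo)
  )
)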
|
This removes the trapping export and all others after it. This avoids a potential
infinite loop that can happen when fuzzing TNH (TrapsNeverHappen): if TNH is set and a trap
happens then the optimizer can create an infinite loop, and while that is valid, it would hang the
fuzzer. We could check for a timeout, but it is faster and more robust to just
remove the code we can't compare anyhow.
This uses wasm-metadce to remove the exports from the failing one.
|
DCE at the end avoids issues with non-nullable local operations in unreachable
code, which is still being discussed. This PR avoids fuzzer errors for now, but we
should revert it when we have a proper fix.
See
* #5599
* #5665
* https://github.com/WebAssembly/function-references/issues/98
|
After this change, the only type system usable from the tools will be the
standard isorecursive type system. The nominal type system is still usable via
the API, but it will be removed entirely in a follow-on PR.
|
I ran CheckDeterminism at full throttle overnight (set to 1, and disabled
all other things) and it found a bug, so we should focus on that more.
Also focus more on ctor-eval, as there is ongoing work there.
I reduced the priority of a few other things that haven't seen bugs in a
very long time and are not high priority.
|
This is the default, and also used by J2Wasm.
|
TypeMerging previously tried to merge types with their supertypes and siblings
in a single step, but this could cause a misoptimization in which a type was
merged with its parent's sibling without being merged with its parent, breaking
subtyping.
Fix the bug by merging with supertypes and siblings separately. Since we now
have multiple merging steps, also take the opportunity to run the sibling
merging step multiple times to exploit more merging opportunities.
Fixes #5556.
|
For example, we might hit an allocation limit in the wasm, but the
optimized wasm might optimize that allocation out. So we need to
ignore comparisons in such cases, as we cannot expect the output
to be identical. We already do similar things for FuzzExec and
#5560 adds it for TrapsNeverHappen; this adds it to CompareVMs.
|
If the program tries to allocate an infinite number of objects, but is
prevented from doing that by a null pointer trap, then after we run
with trapsNeverHappen the trap may fail to occur, and we'll hit the
host limitation on allocations. As a result, we'd be comparing one
run with a trap and one run that is meant to be ignored (as we ignore
runs with host limitations), and before this PR we'd error as we would
expect to find the normal output and not the "ignore this host
limitation" marker.
|
With this we generate random GC types that may be used in creating
instructions later.
We don't create many instructions yet, which will be the next step after
this.
Also add some trivial assertions in some places, that have helped
debugging in the past.
Stop fuzzing TypeMerging for now due to #5556 , which this PR
uncovers.
|
If this number ever gets high then we would need to look into
why we ignore so much. Right now we seem to end up ignoring
much less than 1% which seems ok.
|
After the recent improvements and fixes this is now simple and the fuzzer found
no more issues overnight for me.
Also adjust some existing frequencies.
|
If a type hierarchy has abstract classes in the middle, that is, types that
are never instantiated, then we can optimize casts and other operations
to them. Say in Java we have `AbstractList`, and `IntList` is the only
subclass of it that is ever created; then any place we have an `AbstractList`
we must actually have an `IntList`, or a null. (Or, if no subtype is instantiated,
then the value must definitely be a null.)
The actual implementation does a type mapping, that is, it finds all places
using an abstract type and makes them refer to the single instantiated
subtype (or null). After that change, no references to the abstract type
remain in the program, so this both refines types and also cleans up the
type section.
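A sketch in wasm terms, using the same shorthand cast syntax as the example earlier in this log (type names invented here): since only $IntList is ever instantiated,
(ref.cast $AbstractList (local.get $x))
;; can be refined to
(ref.cast $IntList (local.get $x))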
|
Do not fuzz some new testcases that have imported memories. The fuzzer doesn't seem
to have support for that (it errors when it tries to do operations on them, since the
import hasn't been created).
|
Since #5347 public types are never updated by type optimizations, but the
optimization passes have not yet been updated to take that into account, so they
are all buggy under an open world assumption. In #5359 we worked around many
closed world validation errors in the fuzzer by treating --closed-world like a
feature flag and checking whether it was necessary for fuzzer input, but that
did not prevent the type optimization passes from running under an open world,
so it did not work around all the potential issues.
Work around the problem more thoroughly by not running any type optimization
passes in the fuzzer without --closed-world. Also add logic to those passes to
error out if they are run without --closed-world and update the tests
accordingly.
|