Commit messages

Now that the features section adds on top of the command-line arguments,
the way we test whether initial contents are OK to use no longer works if
the wasm has a features section: the section will enable a feature even when
we want to see if the wasm can work without that feature. To fix this,
strip the features section there.
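
A minimal sketch of the idea (hypothetical helper; assumes the fuzzer shells
out to wasm-opt and uses its --strip-target-features pass, which drops the
features section):

    import subprocess

    def strip_features_section(in_wasm, out_wasm, wasm_opt='bin/wasm-opt'):
        # Remove the features section so a later validation step only sees
        # the features enabled on the command line.
        subprocess.check_call([wasm_opt, in_wasm,
                               '--strip-target-features',
                               '-o', out_wasm])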

This works around the issue of wasm GC type names sometimes getting
truncated (the default names can be very long or even infinitely
recursive). If the truncation leads to a name collision, the wast is not
valid.

The features section is additive since #3960. For the fuzzer to know which
features are used, it therefore needs to also scan the features section. To do
this, run --print-features to get the full set of features from both the flags
and the features section.
As a result, we now have a list of enabled features instead of "enable all,
then disable". I think this is actually clearer, but it does require inverting
the logic in some places.
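
A minimal sketch of how the scan could work (assumptions: the fuzzer shells
out to wasm-opt, and --print-features prints one --enable-* flag per line for
the combined set of features):

    import subprocess

    def get_enabled_features(wasm, extra_flags, wasm_opt='bin/wasm-opt'):
        # Combine the features from the command-line flags and the wasm's
        # features section into one explicit list of --enable-* flags.
        out = subprocess.check_output(
            [wasm_opt, wasm, '--print-features'] + extra_flags, text=True)
        return [line.strip() for line in out.splitlines()
                if line.startswith('--enable-')]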

Emscripten stopped emitting shell support code by default (as most users
run Node.js, but here we are literally fuzzing d8).
Fixes #3967

The support code there emits "low high" as the result; for example, 25 0 would
be 25 (as the high bits are all 0). This is different from how numbers are
reported in the other things we fuzz, so it caused an error.
Fixes #3915
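
For clarity, a minimal sketch (hypothetical helper, not the actual support
code) of how such a "low high" pair maps to a single value:

    def combine_i64(low, high):
        # The two printed 32-bit halves form one unsigned 64-bit value, so
        # combine_i64(25, 0) == 25.
        return ((high & 0xffffffff) << 32) | (low & 0xffffffff)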

If we allocate some GC data and do not let the reference escape, then we can
replace the allocation with locals, basically one local for each field in the
allocation. This avoids the allocation, and also allows us to optimize the
locals further.
On the Dart DeltaBlue benchmark, this is a 24% speedup (making it faster than
the JS version, incidentally), and also a 6% reduction in code size.
The tests are not the best way to show what this does, as the pass assumes
other passes will clean up after it. Here is an example to clarify. First, in
pseudocode:

    ref = new Int(42)
    do {
      ref.set(ref.get() + 1)
    } while (import(ref.get()))

That is, we allocate an int on the heap and use it as a counter. Unnecessarily,
as it could be a normal int on the stack.
Wat:

    (module
     ;; A boxed integer: an entire struct just to hold an int.
     (type $boxed-int (struct (field (mut i32))))
     (import "env" "import" (func $import (param i32) (result i32)))
     (func "example"
      (local $ref (ref null $boxed-int))
      ;; Allocate a boxed integer of 42 and save the reference to it.
      (local.set $ref
       (struct.new_with_rtt $boxed-int
        (i32.const 42)
        (rtt.canon $boxed-int)
       )
      )
      ;; Increment the integer in a loop, looking for some condition.
      (loop $loop
       (struct.set $boxed-int 0
        (local.get $ref)
        (i32.add
         (struct.get $boxed-int 0
          (local.get $ref)
         )
         (i32.const 1)
        )
       )
       (br_if $loop
        (call $import
         (struct.get $boxed-int 0
          (local.get $ref)
         )
        )
       )
      )
     )
    )

Before this pass, the optimizer could do essentially nothing with this.
Even with this pass, running -O1 has no effect, as the pass is only
used in -O2+. However, running --heap2local -O1 leads to this:

    (func $0
     (local $0 i32)
     (local.set $0
      (i32.const 42)
     )
     (loop $loop
      (br_if $loop
       (call $import
        (local.tee $0
         (i32.add
          (local.get $0)
          (i32.const 1)
         )
        )
       )
      )
     )
    )

All the GC heap operations have been removed, and we just have a plain int
now, allowing a bunch of other opts to run. That output is basically the
optimal code, I think.

Previously we only ignored known issues if the process failed. However, some
things do not bring the process down but still need to be ignored, which this
fixes.
Another approach might be to make all the things we need to ignore fail
the entire process. However, that could be annoying for other debugging:
we don't want a host limit, say hitting a VM limit on recursion, to bring
down the entire process, as those limits manifest as traps, and we can
still run after them (and do need to test that). The specific host limit that
made me fix this was the trap on OOM when trying to allocate a 4GB array.
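
A minimal sketch of the approach (the pattern strings here are made up; the
point is that we scan the output itself rather than only reacting to a
failing exit code):

    KNOWN_ISSUE_PATTERNS = [
        'hit a host limitation',  # hypothetical example
        'allocation failed',      # hypothetical example
    ]

    def filter_output(output):
        # Ignore runs whose output mentions a known issue, even if the
        # process itself exited successfully.
        for pattern in KNOWN_ISSUE_PATTERNS:
            if pattern in output:
                return None  # caller skips the comparison
        return output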

There is a conflict between multivalue and GC; see the details in the
comment. There isn't a good way to get the fuzzer to avoid the combination
of them, and GC is more urgent, so disable multivalue in that area for now.
(This does not disable all multivalue fuzzing - the fuzzer can still emit
multivalue code itself. This just disables multivalue in initial content from
the test suite, which is enough for now, until the fuzzer can emit more GC
things, and then we'll need to do more.)

Host limitations are arbitrary and can be modified by optimizations, so
ignore them. For example, if the optimizer removes allocations, then an
allocation that previously hit a host limit may no longer do so. Or, an
optimization that removes recursion and replaces it with a loop may avoid a
host limit on call depth (we don't do that currently, but might some day).
This removes a class of annoying false positives in the fuzzer.

The fuzzer doesn't generate much GC code yet, but it does fuzz things in
the test suite and adds fuzz to them. This PR allows GC when using initial
content, and also in CompareVMs, both of which I have fuzzed locally for
days with no issues.

We give br_if an overly specific type: #3767
This is only noticeable with GC, and only in rare cases where the type of the
br_if is actually used - which realistically it never is, so in practice this
only affects fuzzer testcases.

RTTs are not defaultable, and we cannot spill them to locals.

The problem is that a tuple with a non-nullable element cannot be stored
in a local. We'd need to split up the tuple, but that raises questions about
what should be allowed in flat IR (we'd need to allow nested tuple ops
in more places). That combination doesn't seem urgent, so add a clear
error for now, and avoid it in the fuzzer.
Avoids #3759 in the fuzzer

The check for a valid wasm file must differ depending on whether the wasm has
a features section or not, so just try both ways, with --detect-features
and --all-features. If the wasm is valid, at least one will work.
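
A minimal sketch of "try both ways" (assumes the check shells out to
wasm-opt; the real check may pass other flags as well):

    import subprocess

    def is_valid_wasm(wasm, wasm_opt='bin/wasm-opt'):
        # A wasm with a features section wants --detect-features, one
        # without wants --all-features; if the wasm is valid, one works.
        for flag in ('--detect-features', '--all-features'):
            ok = subprocess.run([wasm_opt, wasm, flag],
                                capture_output=True).returncode == 0
            if ok:
                return True
        return False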

And demonstrate its capabilities by porting all tests of the
optimize-instructions pass to use lit and FileCheck.

This adds rtt.canon and rtt.sub together with the RTT type support
that is necessary for them. Together this lets us test round-tripping the
instructions and types.
Also fixes a missing traversal over globals in collectHeapTypes,
which the example from the GC docs requires, as the RTTs are in
globals there.
This does not yet add full interpreter support and other things. It
disables initial contents on GC in the fuzzer, to avoid the fuzzer
breaking.
Renames the binary ID for exnref, which is being removed from
the spec, and which overlaps with the binary ID for rtt.

When running d8, run it in Liftoff, to avoid tiering up causing nondeterminism
in the results.
When we do want to compare the tiers, we already do so in CompareVMs. This
fixes other places where we just wanted to run some JS in some VM.

bugs (#3401)
* Count signatures in tuple locals.
* Count nested signature types (confirming @aheejin was right that this was
  missing).
* Inlining was using the wrong type.
* OptimizeInstructions should return -1 for unhandled types, not error.
* The fuzzer should check for ref types as well, not just typed function
  references, similar to what GC does.
* The fuzzer now creates a function if it has no other option for creating a
  constant expression of a function type, then does a ref.func of that.
* Handle unreachability in call_ref binary reading.
* S-expression parsing fixes in more places, and add a tiny fuzzer for it.
* Switch the fuzzer test to just have the metrics, and not print all the fuzz
  output, which changes a lot. Also fix noprint handling, which only worked on
  binaries before.
* Fix Properties::getLiteral() to use the specific function type properly, and
  make Literal's function constructor require that, to prevent future bugs.
* Turn all input types into nullable types, for now.

Previously we picked one of the two compilers at the top level. But that
doesn't actually compare them directly - each entire run used just one of the
two. Instead, add separate "VMs" for each of them, and keep the existing D8 VM
as well (which tests tiering up).
The code also seems nicer this way.

OptimizeInstructions is seeing the most work these days, so it's good for
the fuzzer to focus on that some more.
Also move some code around in the main test wast: it's useful to put each
feature in its own module to maximize the chance of getting them to be used.
That is, if a module has a single use of atomics, then if atomics are disabled
in the current run, we can't use any of the module and we skip initial contents
entirely. Moving each feature to its own module reduces that risk. (We do
pick randomly between the modules, and at the moment a small module has the
same chance as a big one, but this still seems worth it.)

Previously the fuzzer constructed a new random valid wasm file from
scratch. The new --initial-fuzz=FILENAME option makes it start from
an existing wasm file and then add random contents on top of that. It
also randomly modifies the existing contents, for example tweaking
a Const or replacing some nodes with other things of the same type.
It also has a chance to replace a drop with a logging call (as some of our
tests just drop a result, and we match the optimized output's wasm
instead of the result; by logging, the fuzzer can check things).
The goal is to find bugs by using existing hand-written testcases as
a basis. This PR uses the test suite's testcases as initial fuzz contents.
This can find issues that random data may be less likely to hit, as the
testcases often check for corner cases - they are designed to be
"interesting".
This has found several bugs already, see recent fuzz fixes. I mentioned
the first few on Twitter but past 4 I stopped counting...
https://twitter.com/kripken/status/1314323318036602880
This required various changes to the fuzzer's generation to account
for the fact that there can be existing functions and so forth before
it starts to run, so it needs to avoid name collisions, for example.
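
A minimal sketch of how generation could use the new option (assumes wasm-opt
is invoked with -ttf on a file of random bytes; probabilities and names are
illustrative):

    import random
    import subprocess

    def generate_wasm(random_input, out_wasm, testcases,
                      wasm_opt='bin/wasm-opt'):
        cmd = [wasm_opt, random_input, '-ttf', '-o', out_wasm]
        # Sometimes seed generation from an existing hand-written testcase.
        if testcases and random.random() < 0.5:
            cmd.append('--initial-fuzz=' + random.choice(testcases))
        subprocess.check_call(cmd)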

We used to either apply all of them or pick each at random. Also add a chance
to pick none at all.
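
A minimal sketch of the selection logic (generic over a list of options; the
exact probabilities are illustrative):

    import random

    def pick(options):
        mode = random.randint(0, 2)
        if mode == 0:
            return list(options)   # apply them all
        if mode == 1:
            return []              # pick none at all
        # Otherwise decide each one independently at random.
        return [o for o in options if random.random() < 0.5]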

If we can't run the -ttf stage, there is no point in printing out the
instructions for reducing things - we can't reduce without a wasm.
(#3269)

This PR contains:
- Changes that enable/disable tests on Windows to allow for better local
  testing.
- Changes many abort() calls into Fatal() when we are really just exiting on
  an error. This is because abort() generates a dialog window on Windows,
  which is not great in automated scripts.
- Improvements to CMake to better work with the project in IDEs (VS).

Adds the `--enable-gc` feature flag, so far enabling the `anyref` type
including subtyping, and removes the temporary `--enable-anyref` feature flag
that it replaces.

Since #3050 the fuzzer can test out-of-tree builds using the `--binaryen-bin`
argument, but the argument was not yet added to the generated `reduce.sh` on
fuzzing failures. This change adds it.

Adds the `anyref` type, which is enabled by a new feature flag,
`--enable-anyref`. This type is primarily used for testing that passes
correctly handle subtype relationships, so that the codebase will continue to
be prepared for future subtyping. Since `--enable-anyref` is meaningless
without also using `--enable-reference-types`, this PR also makes it a
validation error to pass only the former (and similarly makes it a validation
error to enable exception handling without enabling reference types).

Can now run scripts/fuzz_opt.py --binaryen-bin build/bin [opts...] to fuzz an
out-of-tree build.
Handle positional arguments by looking at shared.requested (with options
removed) instead of raw sys.argv.

Comparing to the interpreter, and not just wasm2js to itself (which we've
done on the same file before and after opts), ensures wasm2js has the right
semantics.
To do this, we need to make sure the wasm doesn't contain things where
wasm2js semantics diverge from normal wasm, which includes:
* Legalize so that there are no i64 exports.
* Remove operations JS can't handle with full precision, like i64 -> f32.
* Force all loads/stores to be 1-byte, as unexpectedly-unaligned operations
  fail in wasm2js.
This also requires ignoring subnormals when comparing between JS
VMs and the interpreter.
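
A minimal sketch of the subnormal handling, on parsed values (presumably the
actual check compares printed output lines; the threshold below is the
smallest normal f64):

    import sys

    def normalize(x):
        # Treat subnormal floats as zero before comparing, since JS VMs and
        # the interpreter may not agree on them.
        if isinstance(x, float) and x != 0.0 and abs(x) < sys.float_info.min:
            return 0.0
        return x

    def outputs_match(a, b):
        return [normalize(x) for x in a] == [normalize(x) for x in b]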

wasm2js fuzzing should not compare outputs if the wasm would
trap. wasm2js traps on far fewer things, and if wasm would trap
(like an indirect call with the wrong type) it can just do weird undefined
things. Previously, if running wasm2js trapped then we ignored
the output, but that's not good enough, as we need to check whether
wasm would trap, exactly for the cases just mentioned where wasm
would trap but wasm2js wouldn't. So run the wasm interpreter
to see if that happens.
When we see such a trap, ignore everything from that function
call onwards. This at least lets us compare the results of
previous calls, which adds some amount of coverage (before, we
just ignored the entire output completely, so we only did any
comparison at all if there was no trap at all).
Also give better names than "js.js" to the JS files wasm2js
fuzzing creates.
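
A minimal sketch of cutting the output at the first trap (assumes traps show
up as lines containing 'trap' in the interpreter's logged output):

    def comparable_prefix(interpreter_lines):
        # Keep only the results of calls before the first trap; from that
        # call onwards wasm2js may legally diverge, so we stop comparing.
        kept = []
        for line in interpreter_lines:
            if 'trap' in line:
                break
            kept.append(line)
        return kept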
compilers (#2961)

Use WASM_RT_SETJMP so we use sigsetjmp when we need to.
Also disable signals in emcc+wasm2c in the fuzzer. emcc looks like
Unix, so it enters the ifdef that uses signals, but wasm has no signals...

This finds out which locals are live at call sites that might pause/resume,
which is the set of locals we need to actually save/load. That is, if a local
is not live at any such call site in the function, then its value doesn't need
to stay alive while sleeping.
This saves about 10% of the locals that are saved/loaded, and about 1.5%
in final code size.
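
A toy sketch of the liveness idea, on a straight-line body where each
instruction is (kind, uses, defs) and kind 'call' marks a site that might
pause/resume (the real analysis of course works on the full CFG):

    def locals_to_save(instrs):
        live = set()      # locals live after the current instruction
        needed = set()    # locals live across some pause/resume call
        for kind, uses, defs in reversed(instrs):
            if kind == 'call':
                needed |= live
            live -= set(defs)
            live |= set(uses)
        return needed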
fixes #2915

When the fuzzer script is given a wasm, we don't create a new
one from scratch. But we should still apply --denan and other
things so that we preserve those properties while reducing.
Without this it's possible for reduction to start with a wasm with
no NaNs but lose that property eventually, and end up with a
reduced testcase which is not quite what you want.

We already avoid that in CompareVMs, but Asyncify has the
same issue, as it can also optimize in Binaryen but run in
another VM (with different nondeterministic NaN behavior).

FUZZ_OPTS are things like --denan which affect generation of
the original wasm. We were passing those to testcase handlers,
which meant that in addition to the random optimizations we picked,
they were also running --denan etc. again. That can be confusing,
so remove it.

This moves the fuzzer's de-NaN logic out into a separate pass. This is
cleaner, and also better since the old way would de-NaN once, but then
the reducer could generate code with NaNs. The new way lets us de-NaN
while reducing.

Currently the fuzzer only generates a script for wasm-reduce when it
first discovers a case, and doesn't do that when a seed is given. But
when you get a bug report with a seed, or you want to reproduce the
situation, it is still helpful if the fuzzer generates a reduce script for
you.

Use --emit-target-features and --detect-features.
Rename Asyncify handler temp files (I happened to notice they overwrote other
files, which was annoying).
Fixes #2831

With this, when it finds a bug all you need to do is copy-paste
a single line and it runs the reducer. To do that, it creates a
reducer script and fills it out for you. The script has all the docs
we used to print out to the console, so the console output is
more focused and concise now.
In most cases just running the single line it suggests should
work; however, I found that it doesn't always. One reason is
#2831
Also add a missing sys.exit(1), without which we returned 0
despite printing out an error on a failing testcase.
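
A minimal sketch of generating such a script (the file names and the exact
wasm-reduce invocation are illustrative, not the real ones):

    import os
    import stat

    def write_reduce_script(path, binaryen_bin, bad_wasm):
        # Emit a ready-to-run shell script so reproducing the reduction only
        # takes one copy-pasted command.
        with open(path, 'w') as f:
            f.write('#!/bin/sh\n')
            f.write(f'{binaryen_bin}/wasm-reduce {bad_wasm} '
                    "'--command=sh reproduce.sh' "  # hypothetical helper
                    '-t test.wasm -w work.wasm\n')
        os.chmod(path, os.stat(path).st_mode | stat.S_IEXEC)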

Don't run wasm2c2wasm on large wasm files, as it can OOM the VM.
I've seen this happen on a single wasm on multiple engines, so it's not
a specific engine bug. Sticking to below-average wasm sizes seems OK.
That change means that we check whether a VM can run on a per-wasm basis.
That required some refactoring, as it means we may be able to run on
the "before" wasm but not the "after" one, or vice versa.
Fix a tiny nondeterminism issue with iterating on a set().
Some tiny docs improvements to the error shown when a bug is found.
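
A minimal sketch of the per-wasm check (the class and constant names are
illustrative; the real threshold is based on typical wasm sizes):

    import os

    class Wasm2C2Wasm:
        # Skip this handler for large wasm files, since compiling them
        # through wasm2c can OOM the VM.
        MAX_WASM_SIZE = 100_000  # illustrative, not the real value

        def can_run_on_wasm(self, wasm):
            return os.path.getsize(wasm) <= self.MAX_WASM_SIZE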