| Commit message | Author | Age | Files | Lines |
| |
That PR renamed test/lit/optimize-instructions.wast to
test/lit/optimize-instructions-mvp.wast. However, the fuzzer was explicitly
adding the test to the list of important initial contents under the old name, so
it was failing an assertion that the initial contents existed. Update the fuzzer
to use the new test name.
|
| |
These operations emit a completely different type than their input, so they must be
marked as roots, and not as things that flow values through them (because then
we filter everything out as the types are not compatible).
Fixes #5219
|
| |
| |
The fuzzer started to fail on the recent externalize/internalize test
that was added in #5175 as we lack interpreter support. Move that to a separate
file and ignore it in the fuzzer for now.
|
| |
Also fix some formatting issues in the file.
|
| |
This adds a map of function name => the effects of that function to the
PassOptions structure. That lets us compute those effects once and then
use them in multiple passes afterwards. For example, that lets us optimize
away a call to a function that has no effects:
(drop (call $nothing))
[..]
(func $nothing
;; .. lots of stuff but no effects, only a returned value ..
)
Vacuum will remove that dropped call if we tell it that the called function has
no effects. Note that a nice result of adding this to the PassOptions struct
is that all passes will use the extra info automatically.
This is not enabled by default as the benefits seem rather minor, though it
does help in a small but noticeable way on J2Wasm code, where we use
call.without.effects and have situations like this:
(func $foo
(call $bar)
)
(func $bar
(call.without.effects ..)
)
The call to bar looks like it has effects, normally, but with global effect info
we know it actually doesn't.
To use this, one would do
--generate-global-effects [.. some passes that use the effects ..] --discard-global-effects
Discarding is not necessary, but if there is a pass later that adds effects, then not
discarding could lead to bugs, since we'd think there are fewer effects than there are.
(However, normal optimization passes never add effects, only remove them.)
It's also possible to call this multiple times:
--generate-global-effects -O3 --generate-global-effects -O3
That computes effects after the first -O3, and may find fewer effects than earlier.
This doesn't compute the full transitive closure of the effects across functions. That is,
when computing a function's effects, we don't look into its own calls. The simple case
so far is enough to handle the call.without.effects example from before (though it
may take multiple optimization cycles).
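A minimal sketch of the idea in Python (hypothetical names and data shapes; the real implementation is C++ inside Binaryen's PassOptions): compute a map of function name to effects once, then let a Vacuum-like pass consult it when deciding whether a dropped call can be removed.

```python
# Toy model: a "module" maps function names to instruction lists, and an
# instruction is an (opcode, operand) pair. Anything resembling a store or
# a call counts as an effect here; everything else is pure.

def compute_effects(body):
    """Scan a toy instruction list for effectful opcodes."""
    return {op for op, _ in body if op in ("store", "call")}

def generate_global_effects(module):
    # Computed once, then shared by every later pass.
    return {name: compute_effects(body) for name, body in module.items()}

def vacuum(instrs, effects_map):
    """Drop ('drop', ('call', f)) pairs when f is known to have no effects."""
    out = []
    for op, arg in instrs:
        if op == "drop" and arg[0] == "call" and not effects_map.get(arg[1], {"?"}):
            continue  # the call has no effects and its result is dropped
        out.append((op, arg))
    return out

module = {
    "$nothing": [("const", 42)],               # pure: only returns a value
    "$logger":  [("store", 0), ("const", 1)],  # has a side effect
}
effects = generate_global_effects(module)
main = [("drop", ("call", "$nothing")), ("drop", ("call", "$logger"))]
optimized = vacuum(main, effects)
```

As in the commit, the point is that the per-function analysis runs once and the map is then read by any number of passes, rather than each pass re-deriving the information.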
|
| |
In practice, typed function references will not ship before GC and are not
independently useful, so it's not necessary to have a separate feature for them.
Roll the functionality previously enabled by --enable-typed-function-references
into --enable-gc instead.
This also avoids a problem with the ongoing implementation of the new GC bottom
heap types. That change will make all ref.null instructions in Binaryen IR refer
to one of the bottom heap types. But since those bottom types are introduced in
GC, it's not valid to emit them in binaries unless GC is enabled. The fix
if only reference types is enabled is to emit (ref.null func) instead
of (ref.null nofunc), but that doesn't always work if typed function references
are enabled because a function type more specific than func may be required.
Getting rid of typed function references as a separate feature makes this a
nonissue.
|
| |
These new GC instructions infallibly convert between `extern` and `any`
references now that those types are not in the same hierarchy.
|
| |
Also fix a small logic error - call lines can be prefixes of each other, so use
the full line (including newline) to differentiate.
|
| |
This mode is tricky to fuzz because the mode is basically "assume traps never
happen; if a trap does happen, that is undefined behavior". So if any trap occurs
in the random fuzz testcase, we can't optimize with -tnh and assume the results
stay the same. To avoid that, we ignore all functions from the first one that traps,
that is, we only compare the code that ran without trapping. That often is a small
subset of the total functions, sadly, but I do see that this ends up with some useful
coverage despite the drawback.
This also requires some fixes to comparing of references, specifically, funcrefs
are printed with the function name/index, but that can change during opts, so
ignore that. This wasn't noticed before because this new fuzzer mode does
comparisons of --fuzz-exec-before output, instead of doing --fuzz-exec which
runs it before and after and compares it internally in wasm-opt. Here we are
comparing the output externally, which we didn't do before.
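The comparison strategy can be sketched in Python (a hypothetical helper, not the fuzzer's actual code): given per-function execution logs from before and after optimization, only the prefix up to the first trap in the pre-optimization run is required to match.

```python
def outputs_match(before, after):
    """before/after: ordered (function, outcome) logs; 'trap' marks a trap.

    Under -tnh a trap is undefined behavior, so everything from the first
    trapping function onward is ignored; only the trap-free prefix of the
    original run must be preserved.
    """
    n = next((i for i, (_, out) in enumerate(before) if out == "trap"),
             len(before))
    return before[:n] == after[:n]

before = [("$a", "10"), ("$b", "trap"), ("$c", "7")]
after = [("$a", "10"), ("$b", "999"), ("$c", "8")]  # free to diverge post-trap
```

As the commit notes, the comparable prefix is often a small subset of all functions, but it still yields useful coverage.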
|
| |
This PR removes the single memory restriction in IR, adding support for a single module to reference multiple memories. To support this change, a new memory name field was added to 13 memory instructions in order to identify the memory for the instruction.
It is a goal of this PR to maintain backwards compatibility with existing text and binary wasm modules, so memory indexes remain optional for memory instructions. Similarly, the JS API makes assumptions about which memory is intended when only one memory is present in the module. Another goal of this PR is that existing tests' behavior be unaffected. That said, tests must now explicitly define a memory before invoking memory instructions or exporting a memory, and memory names are now printed for each memory instruction in the text format.
There remain quite a few places where a hardcoded reference to the first memory persists (memory flattening, for example, will return early if more than one memory is present in the module). Many of these call sites, particularly within passes, will require us to rethink how the optimization works in a multi-memories world. Other call sites may necessitate more invasive code restructuring to fully convert away from relying on a globally available, single memory pointer.
|
| |
| |
* better
* fix
* undo
|
| |
| |
RTTs were removed from the GC spec, and if they are added back in the future,
they will be heap types rather than value types as in our implementation.
Updating our implementation to have RTTs be heap types would have been more work
than deleting them for questionable benefit since we don't know how long it will
be before they are specced again.
|
| |
This tracks the possible contents in the entire program all at once using a single IR.
That is in contrast to, say, DeadArgumentElimination or LocalRefining, etc., all of which
look at one particular aspect of the program (function params and returns in DAE,
locals in LocalRefining). The cost is to build up an entire new IR, which takes a lot
of new code (mostly in the already-landed PossibleContents). Another cost
is this new IR is very big and requires a lot of time and memory to process.
The benefit is that this can find opportunities that are only obvious when looking
at the entire program, and also it can track information that is more specialized
than the normal type system in the IR - in particular, this can track an ExactType,
which is the case where we know the value is of a particular type exactly and not
a subtype.
|
| |
#4758 regressed the determinism fuzzer. First, FEATURE_OPTS is not a
list but a string. Second, as a hack another location would add --strip-dwarf
to that list, but that only works in wasm-opt. To fix that, remove the hack
and instead just ignore DWARF-containing files.
|
| |
This starts to implement the Wasm Strings proposal
https://github.com/WebAssembly/stringref/blob/main/proposals/stringref/Overview.md
This just adds the types.
|
| |
Nominal types don't make much sense without GC, and in particular trying to emit
them with typed function references but not GC enabled can result in invalid
binaries because nominal types do not respect the type ordering constraints
required by the typed function references proposal. Making this change was
mostly straightforward, but required fixing the fuzzer to use --nominal only
when GC is enabled and required exiting early from nominal-only optimizations
when GC was not enabled.
Fixes #4756.
|
| |
| |
Relates to #4741
|
| |
This optimizes constants in the megamorphic case of two: when we
know two function references are possible, we could in theory emit this:
(select
(ref.func A)
(ref.func B)
(ref.eq
(..ref value..) ;; globally, only 2 things are possible here, and one has
;; ref.func A as its value, and the other ref.func B
(ref.func A)))
That is, compare to one of the values, and emit the two possible values there.
Other optimizations can then turn a call_ref on this select into an if over
two direct calls, leading to devirtualization.
We cannot compare a ref.func directly (since function references are not
comparable), and so instead we look at immutable global structs. If we
find a struct type that has only two possible values in some field, and
the structs are in immutable globals (which happens in the vtable case
in j2wasm for example), then we can compare the references of the struct
to decide between the two values in the field.
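A toy Python model of the transform (hypothetical names; the real pass emits Binaryen IR): when exactly two references are possible and each determines the field's value, a field read becomes a select on a reference comparison against one of them.

```python
def build_select(possible):
    """possible: mapping of the only possible refs -> that ref's field value."""
    if len(possible) != 2:
        return None  # only handle the megamorphic case of exactly two
    (ref_a, val_a), (_ref_b, val_b) = possible.items()
    # Models: (select (val_a) (val_b) (ref.eq incoming_ref ref_a))
    return lambda ref: val_a if ref == ref_a else val_b

# E.g. two immutable vtable globals, each holding one possible method.
possible = {"$vtable.A": "$method.A", "$vtable.B": "$method.B"}
pick = build_select(possible)
```

A later pass can then turn a call through `pick`'s select into an if over two direct calls, which is the devirtualization the commit describes.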
|
| |
If we use it as initial contents, we will try to execute it, and hit the TODOs
in the interpreter for unimplemented parts.
|
| |
Without this, the error from a bad input wasm file - either given by the user, or
produced during a reduction where smaller wasms are given - could be very confusing.
A bad wasm file during reduction, in particular, most likely indicates a bug in
the reducer.
|
| |
Without this the reduction will fail on not being able to
parse the input file, if the input file depends on
nominal typing.
|
| |
wasm-dis does enable all features by default, so we don't need the
feature flags, but we do need --nominal etc. since we emit such
modules now.
|
| |
Instead of a raw run command, use the helper function, which adds the
feature flags. That adds --nominal, which is needed more now after #4625.
This fixes the fuzz failures mentioned in #4625 (comment)
|
| |
Add a flag to make it easy to pick which typesystem to test.
|
| |
This adds a new signature-pruning pass that prunes parameters from
signature types where those parameters are never used in any function
that has that type. This is similar to DeadArgumentElimination but works
on a set of functions, and it can handle indirect calls.
Also move a little code from SignatureRefining into a shared place to
avoid duplication of logic to update signature types.
This pattern happens in j2wasm code, for example if all method functions
for some virtual method just return a constant and do not use the this
pointer.
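The rule can be sketched in Python (a hypothetical model, not the pass itself): a parameter of a signature type may be pruned only if no function with that type reads it, so the used-parameter sets are unioned across all functions sharing the signature.

```python
def prune_signatures(funcs):
    """funcs: list of (signature_name, set_of_used_param_indices)."""
    used = {}
    for sig, used_idx in funcs:
        # Union across every function sharing the signature type: unlike
        # DeadArgumentElimination, this must be safe for indirect calls too.
        used.setdefault(sig, set()).update(used_idx)
    # A parameter index survives only if some function with the type reads it.
    return {sig: sorted(idx) for sig, idx in used.items()}

funcs = [
    ("$vt-method", {1}),     # ignores param 0 (the 'this' pointer)
    ("$vt-method", set()),   # returns a constant, uses nothing
]
kept = prune_signatures(funcs)
```

In this toy case param 0 is unused by every function of the type, matching the j2wasm pattern the commit mentions, so it can be dropped from the signature and from all call sites.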
|
| |
This enables fuzzing EH with initial contents. fuzzing.cpp/h does not
yet support generation of EH instructions, but with this we can still
fuzz EH based on initial contents.
The fuzzer ran successfully for more than 1,900,000 iterations, with my
local modification that always enables EH and lets the fuzzer select
only EH tests for its initial contents.
|
| |
|
| |
| |
After #4173 the fuzzer shows the initial contents and prompts the user to confirm
they want to proceed, but when the fuzzer is used within a script called
from `wasm-reduce`, we shouldn't pause for user input. This shows
the prompt only when no seed is given.
To do that, we now initialize the important initial contents from
`main`; we used to assign those variables before `main` started.
|
| |
See #4149
This modifies the test added in #4163 which used static casts on
dynamically-created structs and arrays. That was technically not
valid (as we won't want users to "mix" the two forms). This makes that
test 100% static, which both fixes the test and gives test coverage
to the new instructions added here.
|
| |
This is another attempt to address #4073. Instead of relying on the
timestamp, this examines git log to gather the list of test files added
or modified within some fixed number of days. The number of days is
currently set to 30 (= 1 month) but can be changed. This will be enabled
by `--auto-initial-contents`, which is now disabled by default.
Hopefully fixes #4073.
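The selection step can be sketched as a pure Python function (hypothetical code, assuming log text in the shape produced by something like `git log --since="30 days ago" --name-only --pretty=format:`): filter the touched paths down to test files.

```python
def recently_touched_tests(log_output):
    """Pick test files out of `git log --name-only` style output."""
    files = set()
    for line in log_output.splitlines():
        line = line.strip()
        # Keep only wasm text test files under test/.
        if line.startswith("test/") and line.endswith((".wast", ".wat")):
            files.add(line)
    return sorted(files)

# Example paths (made up) as the log command might list them:
sample = """\
test/lit/passes/once-reduction.wast
src/passes/OnceReduction.cpp
test/lit/passes/heap2local.wast
"""
recent = recently_touched_tests(sample)
```

Keying on git history rather than file timestamps is the point of the change: timestamps vary across checkouts, while the log is the same for everyone.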
|
| |
We added an optional ReFinalize in OptimizeInstructions at some point,
but that is not valid: The ReFinalize only updates types when all other
work is done, but the pass works incrementally. The bug the fuzzer found
is that a child is changed to be unreachable, and then the parent is
optimized before finalize() is called on it, which led to an assertion being
hit (as the child was unreachable but not the parent, which should also
be).
To fix this, do not change types in this pass. Emit an extra block with a
declared type when necessary. Other passes can remove the extra block.
|
| |
This PR helps with functions like this:
function foo(x) {
if (x) {
..
lots of work here
..
}
}
If "lots of work" is large enough, then we won't inline such a
function. However, we may end up calling into the function
only to get a false on that if and immediately exit. So it is useful
to partially inline this function, basically by creating a split
of it into a condition part that is inlineable
function foo$inlineable(x) {
if (x) {
foo$outlined();
}
}
and an outlined part that is not inlineable:
function foo$outlined(x) {
..
lots of work here
..
}
We can then inline the inlineable part. That means that a call
like
foo(param);
turns into
if (param) {
foo$outlined();
}
In other words, we end up replacing a call and then a check with
a check and then a call. Any time that the condition is false, this
will be a speedup.
The cost here is increased size, as we duplicate the condition
into the callsites. For that reason, only do this when heavily
optimizing for size.
This is a 10% speedup on j2cl. This helps two types of functions
there: Java class inits, which often look like "have I been
initialized before? if not, do all this work", and also assertion
methods which look like "if the input is null, throw an
exception".
|
| |
Avoids a crash in calling getHeapType when there isn't one.
Also add the relevant lit test (and a few others) to the list of files to
fuzz more heavily.
|
| |
tablify() attempts to turn a sequence of br_ifs into a single
br_table. This PR adds some flexibility to the specific pattern it
looks for, specifically:
* Accept i32.eqz as a comparison to zero, rather than only looking
for i32.eq against a constant.
* Allow the first condition to be a tee. If it is, compare later
conditions to local.get of that local.
This will allow more br_tables to be emitted in j2cl output.
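The br_if-chain-to-br_table idea can be modeled in a few lines of Python (hypothetical representation, not the pass's IR): collect the compared constants, then build a dense table with holes branching to a default.

```python
def tablify(chain, default):
    """chain: list of (compared_constant, branch_target) tests on one value.

    Returns (lowest_constant, dense_table, default_target), modeling a
    br_table: subtract `lowest_constant` from the value and index the table.
    """
    by_const = dict(chain)
    lo, hi = min(by_const), max(by_const)
    # Dense jump table from lo..hi; holes branch to the default target.
    return lo, [by_const.get(c, default) for c in range(lo, hi + 1)], default

# i32.eqz is just a comparison against 0, so it can participate too.
lo, table, default = tablify([(0, "$zero"), (1, "$one"), (3, "$three")], "$out")
```

The flexibility the commit adds (accepting i32.eqz, and a tee'd first condition followed by local.gets) widens which source chains match this shape.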
|
| |
Some functions run only once with this pattern:
function foo() {
if (foo$ran) return;
foo$ran = 1;
...
}
If that global is not ever set to 0, then the function's payload (after the
initial if and return) will never execute more than once. That means we
can optimize away dominated calls:
foo();
foo(); // we can remove this
To do this, we find which globals are "once", which means they can
fit in that pattern, as they are never set to 0. If a function looks like the
above pattern, and its global is "once", then the function is "once" as
well, and we can perform this optimization.
This removes over 8% of static calls in j2cl.
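A straight-line sketch of the call-removal step (hypothetical Python model; the real pass works over Binaryen IR and uses dominance information rather than a simple list): once a "once" function has been called on a path, later dominated calls to it are redundant.

```python
def remove_dominated_once_calls(calls, once_funcs):
    """calls: calls in execution order along one straight-line path."""
    seen = set()
    out = []
    for f in calls:
        if f in once_funcs and f in seen:
            continue  # the guarded payload already ran on this path
        seen.add(f)
        out.append(f)
    return out

calls = ["$foo", "$bar", "$foo", "$foo"]
optimized = remove_dominated_once_calls(calls, once_funcs={"$foo"})
```

The "once" property itself comes from the global analysis in the commit: the guard global is never set back to 0, so the body after the early return can run at most once.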
|
| |
When we catch "Output must be deterministic" we can't see any details. This PR fixes
this, and now we can see a diff of the b1.wasm and b2.wasm files.
Example output:
Output must be deterministic.
Diff:
--- expected
+++ actual
@@ -2072,9 +2072,7 @@
)
(drop
(block $label$16 (result funcref)
- (local.set $10
- (ref.null func)
- )
+ (nop)
(drop
(call $22
(f64.const 0.296)
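The report above has the shape of a unified diff; in Python, the standard library's difflib produces the same format (a sketch of the reporting idea, not the fuzzer's actual code):

```python
import difflib

def determinism_diff(expected_text, actual_text):
    """Render an expected/actual mismatch as a unified diff string."""
    return "".join(difflib.unified_diff(
        expected_text.splitlines(keepends=True),
        actual_text.splitlines(keepends=True),
        fromfile="expected", tofile="actual",
    ))

# Tiny example mirroring the commit's sample output:
d = determinism_diff("(local.set $10\n  (ref.null func)\n)\n", "(nop)\n")
```

Printing such a diff on mismatch is far more actionable than the bare "Output must be deterministic" assertion.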
|
| |
Technically this is not a new pass, but it is a rewrite almost from scratch.
Local Common Subexpression Elimination looks for repeated patterns,
stuff like this:
x = (a + b) + c
y = a + b
=>
temp = a + b
x = temp + c
y = temp
The old pass worked on flat IR, which is inefficient, and was overly
complicated because of that. The new pass uses a new algorithm that
I think is pretty simple, see the detailed comment at the top.
This keeps the pass enabled only in -O4, like before - right after
flattening the IR. That is to make this as minimal a change as possible.
Followups will enable the pass in the main pipeline, that is, we will
finally be able to run it by default. (Note that to make the pass work
well after flatten, an extra simplify-locals is added - the old pass used
to do part of simplify-locals internally, which was one source of
complexity. Even so, some of the -O4 tests have changes, due to
minor factors - they are just minor orderings etc., which can be
seen by inspecting the outputs before and after using e.g.
--metrics)
This plus some followup work leads to large wins on wasm GC output.
On j2cl there is a common pattern of repeated struct.gets, so common
that this pass removes 85% of all struct.gets, which makes the total
binary 15% smaller. However, on LLVM-emitted code the benefit is
minor, less than 1%.
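The core of local CSE can be sketched in Python (a simplified model with made-up names, not the pass's actual algorithm, which must also respect side effects and local invalidation): hash each subexpression, and reuse a temp when the same expression appears again.

```python
def local_cse(assignments):
    """assignments: list of (var, expr); expr is ('+', left, right) or a name."""
    seen = {}      # expression -> temp var already holding it
    out = []
    counter = [0]

    def visit(expr):
        if not isinstance(expr, tuple):
            return expr                 # a plain variable name
        expr = tuple(visit(e) for e in expr)
        if expr in seen:
            return seen[expr]           # repeated pattern: reuse the temp
        counter[0] += 1
        temp = f"t{counter[0]}"
        seen[expr] = temp
        out.append((temp, expr))        # hoist into a fresh temp
        return temp

    for var, expr in assignments:
        out.append((var, visit(expr)))
    return out

# x = (a + b) + c ; y = a + b  =>  t1 = a + b ; x = t1 + c ; y = t1
prog = [("x", ("+", ("+", "a", "b"), "c")), ("y", ("+", "a", "b"))]
optimized = local_cse(prog)
```

This sketch hoists every subexpression into a temp and relies on later cleanup (as the commit notes, the real pass also assumes simplify-locals and friends run afterwards).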
|
| |
| |
Now that the features section adds on top of the commandline arguments,
it means the way we test if initial contents are ok to use will not work if
the wasm has a features section - as it will enable a feature, even if
we wanted to see if the wasm can work without that feature. To fix this,
strip the features section there.
|
| |
This works around the issue with wasm gc types sometimes getting
truncated (as the default names can be very long or even infinitely
recursive). If the truncation leads to name collision, the wast is not
valid.
|
| |
The features section is additive since #3960. For the fuzzer to know which features
are used, it therefore needs to also scan the features section. To do this,
run --print-features to get the total features used from both flags + the
features section.
A result of this is that we now have a list of enabled features instead of
"enable all, then disable". This is actually clearer I think, but it does require
inverting the logic in some places.
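The scanning step can be sketched as a small parser (hypothetical code; it assumes the --print-features output lists one `--enable-*` flag per line, covering flags plus the features section combined):

```python
def total_features(print_features_output):
    """Parse --enable-* lines into a sorted list of enabled feature names."""
    prefix = "--enable-"
    return sorted(line.strip()[len(prefix):]
                  for line in print_features_output.splitlines()
                  if line.strip().startswith(prefix))

# Example output text (made up) as the tool might print it:
sample = "--enable-gc\n--enable-reference-types\n"
enabled = total_features(sample)
```

This matches the commit's shift to a positive list of enabled features rather than "enable all, then disable".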
|
| |
Emscripten stopped emitting shell support code by default (as most users
run node.js, but here we are literally fuzzing d8).
Fixes #3967
|
| |
The support code there emits "low high" as the result, for example, 25 0 would
be 25 (as the high bits are all 0). This is different from how numbers are reported
in other things we fuzz, so this caused an error.
Fixes #3915
|
| |
If we allocate some GC data, and do not let the reference escape, then we can
replace the allocation with locals, one local for each field in the allocation
basically. This avoids the allocation, and also allows us to optimize the locals
further.
On the Dart DeltaBlue benchmark, this is a 24% speedup (making it faster than
the JS version, incidentally), and also a 6% reduction in code size.
The tests are not the best way to show what this does, as the pass assumes
other passes will clean up after. Here is an example to clarify. First, in pseudocode:
ref = new Int(42)
do {
ref.set(ref.get() + 1)
} while (import(ref.get()))
That is, we allocate an int on the heap and use it as a counter. Unnecessarily,
as it could be a normal int on the stack.
Wat:
(module
;; A boxed integer: an entire struct just to hold an int.
(type $boxed-int (struct (field (mut i32))))
(import "env" "import" (func $import (param i32) (result i32)))
(func "example"
(local $ref (ref null $boxed-int))
;; Allocate a boxed integer of 42 and save the reference to it.
(local.set $ref
(struct.new_with_rtt $boxed-int
(i32.const 42)
(rtt.canon $boxed-int)
)
)
;; Increment the integer in a loop, looking for some condition.
(loop $loop
(struct.set $boxed-int 0
(local.get $ref)
(i32.add
(struct.get $boxed-int 0
(local.get $ref)
)
(i32.const 1)
)
)
(br_if $loop
(call $import
(struct.get $boxed-int 0
(local.get $ref)
)
)
)
)
)
)
Before this pass, the optimizer could do essentially nothing with this.
Even with this pass, running -O1 has no effect, as the pass is only
used in -O2+. However, running --heap2local -O1 leads to this:
(func $0
(local $0 i32)
(local.set $0
(i32.const 42)
)
(loop $loop
(br_if $loop
(call $import
(local.tee $0
(i32.add
(local.get $0)
(i32.const 1)
)
)
)
)
)
)
All the GC heap operations have been removed, and we just
have a plain int now, allowing a bunch of other opts to run. That
output is basically the optimal code, I think.
|
| |
|