| Commit message | Author | Age | Files | Lines |
| |
Per the wasm spec's guidelines for Load (rule 10) and Store (rule 12), this PR adds an option for bounds checking, producing a runtime error if an instruction exceeds the bounds of its particular memory within the combined memory.
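As a rough illustration of what a checked access might lower to (hypothetical helper and global names, not the pass's actual output; the real codegen may differ):
```
;; hypothetical lowering of an i32.load from the second memory, with checking on
(func $checked_load_mem1 (param $ptr i32) (result i32)
  ;; trap if the 4-byte access would run past the end of that memory
  (if (i32.gt_u
        (i32.add (local.get $ptr) (i32.const 4))
        (i32.mul (global.get $mem1_size_in_pages) (i32.const 65536)))
    (then (unreachable)))
  ;; otherwise load from the combined memory at this memory's byte offset
  (i32.load (i32.add (local.get $ptr) (global.get $mem1_byte_offset))))
```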
| |
This finds types that can be merged into their super: types that add no
fields, and are not used in casts, etc. - so we might as well use the super.
This complements TypeSSA, in that it can merge back the new types that
TypeSSA created, if we never found a use for them. Without this, TypeSSA
can bloat binary size quite a lot (I see 10-20%).
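For example, a minimal sketch with hypothetical type names (using current GC text syntax): a subtype that adds nothing over its super and is never cast to can simply be replaced by the super everywhere.
```
;; $B adds no fields over $A and is not used in casts
(type $A (sub (struct (field i32))))
(type $B (sub $A (struct (field i32))))
(func $make (result (ref $A))
  ;; after the pass this can allocate $A directly, and $B can be removed
  (struct.new_default $B)
)
```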
| |
This creates new nominal types for each (interesting) struct.new. That then allows
type-based optimizations to be more precise, as those optimizations will track
separate info for each struct.new, in effect. That is kind of like SSA; however, we
do not handle merges. For example:
x = struct.new $A (5);
print(x.value);
y = struct.new $A (11);
print(y.value);
=>
x = struct.new $A.x (5);
print(x.value);
y = struct.new $A.y (11);
print(y.value);
After the pass runs, each of those struct.news creates a unique type, and type-based
analysis can see that 5 or 11, respectively, is the only value written to that type (if nothing else
writes there).
This bloats the type section with the new subtypes, so it is best used with a pass
to merge unneeded duplicate types, which a later PR will add. That later PR will
exactly merge back in the types created here, which are nominally different but
indistinguishable otherwise.
This pass is not enabled by default. It's not clear yet where the best place to run it is,
as it must be balanced by type merging, but it might be better to do multiple
rounds of optimization between the two. Needs more investigation.
| |
The flag does nothing so far.
| |
Equirecursive typing is no longer on the standards track and its implementation is extremely
complex. Remove it.
| |
(some.operation
(ref.cast .. (local.get $ref))
(local.get $ref)
)
=>
(some.operation
(local.tee $temp
(ref.cast .. (local.get $ref))
)
(local.get $temp)
)
This can help cases where we cast for some reason but happen to not use the
cast value in all places. This occurs in j2wasm in itable calls sometimes: the
this pointer is refined, but the itable call may be done with an unrefined pointer,
which is less optimizable.
So far this is just inside basic blocks, but that is enough for the case of itable
calls and other common patterns I see.
| |
Fixes #5250
| |
Monomorphization finds cases where we send more refined types to a function
than it declares. In such cases we can copy the function and refine the parameters:
// B is a subtype of A
foo(new B());
function foo(x : A) { ..}
=>
foo_B(new B()); // call redirected to refined copy
function foo(x : A) { ..} // unchanged
function foo_B(x : B) { ..} // refined copy
This increases code size so it may not be worth it in all cases. This initial PR is
hopefully enough to start experimenting with this on performance, and so it does
not enable the pass by default.
This adds two variations of monomorphization, one that always does it, and the
default which is "careful": it sees whether monomorphizing lets the refined function
actually be better than the original (say, by removing a cast). If there is no
improvement then we do not make any changes. This saves a significant amount
of code size - on j2wasm the careful version increases code size by 13% instead of 20% -
but it obviously runs more slowly.
| |
This sorts globals by their usage (while respecting dependencies). If the module
has very many globals then using smaller LEBs can matter.
If there are fewer than 128 globals then we cannot reduce size, and the pass
exits early (so this pass will not slow down MVP builds, which usually have just
1 global, the stack pointer). But with wasm GC it is common to use globals for
vtables etc., and often there is a very large number of them.
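The size effect comes from the LEB128 encoding of global indices; a rough sketch of the arithmetic:
```
;; a global.get is one opcode byte plus a LEB128 index:
(global.get 42)   ;; index 0..127     -> 1-byte LEB, 2 bytes total
(global.get 300)  ;; index 128..16383 -> 2-byte LEB, 3 bytes total
;; so moving the most heavily used globals below index 128 shrinks every use
```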
| |
Adds a multi-memories lowering pass that will create a single combined memory from the memories added to the module. This pass assumes that each memory is configured the same (type, shared).
This pass also:
- replaces existing memory.size instructions with a custom function that returns the size of each memory as if it existed independently
- replaces existing memory.grow instructions with a custom function, using global offsets to track the size (in pages) of each memory so data doesn't overlap in the single combined memory
- adjusts the offsets of active data segments
- adjusts the offsets of Loads/Stores
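A minimal sketch of the load rewriting, with hypothetical names, assuming two one-page memories where the second memory's data is placed after the first in the combined memory (in the real pass the offset is tracked via a global so that memory.grow keeps working):
```
;; before: a load from the second memory
(i32.load $memory_b (local.get $ptr))
;; after: the same load on the single combined memory, shifted past memory_a's
;; 64 KiB of data
(i32.load (i32.add (local.get $ptr) (i32.const 65536)))
```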
| |
This allows a three-step upgrade process where binaryen is updated with this
change, then users remove their use of these flags, then binaryen can remove the
flags permanently.
| |
This adds a map of function name => the effects of that function to the
PassOptions structure. That lets us compute those effects once and then
use them in multiple passes afterwards. For example, that lets us optimize
away a call to a function that has no effects:
(drop (call $nothing))
[..]
(func $nothing
;; .. lots of stuff but no effects, only a returned value ..
)
Vacuum will remove that dropped call if we tell it that the called function has
no effects. Note that a nice result of adding this to the PassOptions struct
is that all passes will use the extra info automatically.
This is not enabled by default as the benefits seem rather minor, though it
does help in a small but noticeable way on J2Wasm code, where we use
call.without.effects and have situations like this:
(func $foo
(call $bar)
)
(func $bar
(call.without.effects ..)
)
The call to bar looks like it has effects, normally, but with global effect info
we know it actually doesn't.
To use this, one would do
--generate-global-effects [.. some passes that use the effects ..] --discard-global-effects
Discarding is not necessary, but if there is a pass later that adds effects, then not
discarding could lead to bugs, since we'd think there are fewer effects than there are.
(However, normal optimization passes never add effects, only remove them.)
It's also possible to call this multiple times:
--generate-global-effects -O3 --generate-global-effects -O3
That computes effects after the first -O3, and may find fewer effects than earlier.
This doesn't compute the full transitive closure of the effects across functions. That is,
when computing a function's effects, we don't look into its own calls. The simple case
so far is enough to handle the call.without.effects example from before (though it
may take multiple optimization cycles).
| |
Adds an --in-secondary-memory switch to the wasm-split tool that allows profile data to be stored in a separate memory from the module's main memory. With this option, users do not need to reserve the initial memory region for profile data and the data can be shared between multiple threads.
| |
In practice typed function references will not ship before GC and are not
independently useful, so it's not necessary to have a separate feature for them.
Roll the functionality previously enabled by --enable-typed-function-references
into --enable-gc instead.
This also avoids a problem with the ongoing implementation of the new GC bottom
heap types. That change will make all ref.null instructions in Binaryen IR refer
to one of the bottom heap types. But since those bottom types are introduced in
GC, it's not valid to emit them in binaries unless GC is enabled. The fix
when only reference types is enabled is to emit (ref.null func) instead
of (ref.null nofunc), but that doesn't always work if typed function references
are enabled because a function type more specific than func may be required.
Getting rid of typed function references as a separate feature makes this a
nonissue.
| |
Add a pass that wraps all imports and exports with functions that handle
storing and passing along the suspender externref needed for JSPI.
https://github.com/WebAssembly/js-promise-integration/blob/main/proposals/js-promise-integration/Overview.md
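A rough sketch of the shape of such a wrapper (hypothetical names, not the pass's exact output): an export gains an extra suspender parameter that is stashed in a global, and wrapped imports forward it.
```
(global $suspender (mut externref) (ref.null extern))
;; wrapper for an export: receives the suspender, saves it, calls the original
(func $export$compute (param $susp externref) (param $x i32) (result i32)
  (global.set $suspender (local.get $susp))
  (call $compute (local.get $x))
)
;; wrapper for an import: passes the saved suspender along to the real import
(func $import$sleep (param $ms i32)
  (call $sleep (global.get $suspender) (local.get $ms))
)
```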
| |
Adds multi-memories to the list of wasm-features.
| |
This is no longer needed by emscripten as of:
https://github.com/emscripten-core/emscripten/pull/16529
| |
There are several reasons why a function may not be trained in deterministically,
so to perform a quick validation we need to inspect profile.data (the other way requires actually performing the split). However, profile.data is a binary file and is not self-sufficient, so we cannot currently use it to perform such validation.
Therefore, to allow a quick check of whether a particular function has been trained in, we need to dump profile.data in a more readable format.
This PR allows us to output to the console, in a readable format, the list of functions to be kept (in the main wasm) and the split functions (to be moved to deferred.wasm).
Added a new option `--print-profile`. It requires:
- the input path to orig.wasm (the original wasm file that will be used later during the split)
- the input path to the profile.data that we want to print
Optionally pass `--unescape` to unescape the function names.
Usage:
```
binaryen\build>bin\wasm-split.exe test\profile_data\MY.orig.wasm --print-profile=test\profile_data\profile.data > test\profile_data\out.log
```
Note the meaning of the prefixes:
`+` => fn to be kept in main wasm
`-` => fn to be split and moved to deferred wasm
| |
This tracks the possible contents in the entire program all at once using a single IR.
That is in contrast to, say, DeadArgumentElimination or LocalRefining etc., all of which
look at one particular aspect of the program (function params and returns in DAE,
locals in LocalRefining). The cost is building up an entire new IR, which takes a lot
of new code (mostly in the already-landed PossibleContents). Another cost
is that this new IR is very big and requires a lot of time and memory to process.
The benefit is that this can find opportunities that are only obvious when looking
at the entire program, and also it can track information that is more specialized
than the normal type system in the IR - in particular, this can track an ExactType,
which is the case where we know the value is of a particular type exactly and not
a subtype.
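A small example of why exactness matters (hypothetical types): knowing a reference is exactly $A, and not possibly a subtype, lets the analysis use only the writes to $A itself.
```
;; if every struct.new of $A writes 5 to this field, and this reference is known
;; to be exactly $A (not a subtype $B that may write something else), then the
;; struct.get below can be replaced by (i32.const 5)
(struct.get $A 0 (local.get $ref))
```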
| |
Implement the basic infrastructure for the full WAT parser with just enough
detail to parse basic modules that contain only imported globals. Parsing
functions correspond to elements of the grammar in the text specification and
are templatized over context types that correspond to each phase of parsing.
Errors are explicitly propagated via `Result<T>` and `MaybeResult<T>` types.
Follow-on PRs will implement additional phases of parsing and parsing for new
elements in the grammar.
| |
We have some possible use cases for this pass, and so are restoring
it.
This reverts the removal in #3261, fixes compile errors from internal API
changes since then, and flips the direction of the stack for the
wasm backend.
| |
This optimizes constants in the megamorphic case of two: when we
know two function references are possible, we could in theory emit this:
(select
(ref.func A)
(ref.func B)
(ref.eq
(..ref value..) ;; globally, only 2 things are possible here, and one has
;; ref.func A as its value, and the other ref.func B
(ref.func A)))
That is, compare to one of the values, and emit the two possible values there.
Other optimizations can then turn a call_ref on this select into an if over
two direct calls, leading to devirtualization.
We cannot compare a ref.func directly (since function references are not
comparable), and so instead we look at immutable global structs. If we
find a struct type that has only two possible values in some field, and
the structs are in immutable globals (which happens in the vtable case
in j2wasm for example), then we can compare the references of the struct
to decide between the two values in the field.
| |
This adds a new signature-pruning pass that prunes parameters from
signature types where those parameters are never used in any function
that has that type. This is similar to DeadArgumentElimination but works
on a set of functions, and it can handle indirect calls.
Also move a little code from SignatureRefining into a shared place to
avoid duplication of logic to update signature types.
This pattern happens in j2wasm code, for example if all method functions
for some virtual method just return a constant and do not use the this
pointer.
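For example, a minimal sketch (hypothetical type and function names) of the kind of rewrite:
```
;; before: no function of type $sig uses its second parameter
(type $sig (func (param (ref $vtable) i32)))
(func $impl (type $sig) (param $this (ref $vtable)) (param $unused i32)
  ;; ... never reads $unused ...
)
;; after: the parameter is pruned from the type, from every function with that
;; type, and from every call and call_ref that uses it
(type $sig (func (param (ref $vtable))))
(func $impl (type $sig) (param $this (ref $vtable))
  ;; ...
)
```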
| |
See https://github.com/WebAssembly/extended-const
| |
Merge similar functions that differ only in constant values (like the immediate
operands of const and call instructions) by parameterization.
Performing this pass at post-link time can merge more functions across
objects. Inspired by Swift compiler's optimization which is derived from
LLVM's one:
https://github.com/apple/swift/blob/main/lib/LLVMPasses/LLVMMergeFunctions.cpp
https://github.com/llvm/llvm-project/blob/main/llvm/docs/MergeFunctions.rst
The basic ideas here are constant value parameterization and direct callee
parameterization by indirection.
Constant value parameterization is like below:
;; Before
(func $big-const-42 (result i32)
[[many instr 1]]
(i32.const 42)
[[many instr 2]]
)
(func $big-const-43 (result i32)
[[many instr 1]]
(i32.const 43)
[[many instr 2]]
)
;; After
(func $byn$mgfn-shared$big-const-42 (result i32)
[[many instr 1]]
(local.get $0) ;; parameterized!!
[[many instr 2]]
)
(func $big-const-42 (result i32)
(call $byn$mgfn-shared$big-const-42
(i32.const 42)
)
)
(func $big-const-43 (result i32)
(call $byn$mgfn-shared$big-const-42
(i32.const 43)
)
)
Direct callee parameterization is similar to the constant value parameterization,
but it parameterizes the callee function by ref.func instead. Therefore it is enabled
only when the reference-types and typed-function-references features are enabled.
I saw a 1-2% reduction for the SwiftWasm binary and Ruby's wasm port
built using wasi-sdk, and a 3-4.5% reduction for a Unity WebGL binary with -Oz.
| |
Introduce static consts with the PassOptions defaults.
Add an assertion to verify that the default options are the Os options.
Also update the text in relevant tests.
| |
Add an option for running the asyncify transformation on the primary module
emitted by wasm-split. The idea is that the placeholder functions should be able
to unwind the stack while the secondary module is asynchronously loaded, then
once the placeholder functions have been patched out by the secondary module the
stack should be rewound and end up in the correct secondary function.
| |
Add support for isorecursive types to wasm-fuzz-types by generating recursion
groups and ensuring that child types are only selected from candidates
through the end of the current group. For non-isorecursive systems, treat all
the types as belonging to a single group so that their behavior is unchanged.
Also fix two small bugs found by the fuzzer: LUB calculation was taking the
wrong path for isorecursive types and isorecursive validation was not handling
basic heap types properly.
| |
After emscripten-core/emscripten#15905 lands Emscripten will no longer use it,
and nothing else needs it AFAIK.
| |
Eventually this will enable the isorecursive hybrid type system described in
https://github.com/WebAssembly/gc/pull/243, but for now it just throws a fatal
error if used.
| |
This is useful for the case where we might want to finalize
without extracting metadata.
See: https://github.com/emscripten-core/emscripten/pull/15918
| |
By default wasm-ctor-eval removes exports that it manages to completely
eval (if it just partially evals then the export remains, but points to a function
with partially-evalled contents). However, in some cases we do want to keep
the export around even so, for example during fuzzing (as the fuzzer wants
to call the same exports before and after wasm-ctor-eval runs) and also
if there is an ABI we need to preserve (like if we manage to eval all of
main()), or if the function returns a value (which we don't support yet, but
this is a PR to prepare for that).
Specifically, there is now a new option:
--kept-exports foo,bar
That is a list of exports to keep around.
Note that when we keep around an export after evalling the ctor we
make the export point to a new function. That new function just
contains a nop, so that nothing happens when it is called. But the
original function is kept around as it may have other callers, which we
do not want to modify.
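A sketch of what keeping an export looks like (hypothetical names; the actual new function name is an implementation detail):
```
;; before: the export points at the evalled ctor
(export "init" (func $init))
;; after evalling with --kept-exports init: the export remains, but points at a
;; new empty function, while $init is left intact for any other callers
(export "init" (func $init_1))
(func $init_1
  (nop)
)
```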
| |
This is meant to address one of the main limitations of wasm-ctor-eval in
emscripten atm: libc++ global ctors will read env vars, which means they
call an import, which stops us from evalling; see
emscripten-core/emscripten#15403.
To handle that, this adds an option to ignore external input. When set, we can
assume that no env vars will be read, no reading from stdin, no arguments to
main(), etc. Perhaps these could each be separate options, but I think keeping it
simple for now might be good enough.
| |
The general shape of the --help output is now:
========================
wasm-foo
Does the foo operation
========================
wasm-foo opts:
--------------
--foo-bar ..
Tool opts:
----------
..
The options are now in categories, with the more specific ones - most likely to be
wanted by the user - first. I think this makes the list a lot less confusing.
In particular, in wasm-opt all the opt passes are now in their own category.
Also add a script to make it easy to update the help tests.
| |
This is fairly short and simple after the recent refactorings. This basically
just finds all uses of each signature/function type, and then sees if it
receives more specific types as params. It then rewrites the types if so.
This just handles arguments so far, and not return types.
This differs from DeadArgumentElimination's refineArguments() in that
that pass modifies each function by itself, changing the type of the
function as needed. That is only valid if the type is not observable, that
is, if the function is called indirectly then DAE ignores it. This pass will
work on the types themselves, so it considers all functions sharing a
type as a whole, and when it upgrades that type it ends up affecting them
all.
This finds optimization opportunities on 4% of the total signature
types in j2wasm. Those lead to some benefits in later opts, but the
effect is not huge.
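A minimal sketch (hypothetical type names) of the kind of rewrite this makes:
```
;; before: every call site of functions with type $sig passes a (ref $B),
;; a subtype of $A
(type $sig (func (param (ref $A))))
;; after: the type itself is refined, affecting all functions that share it
(type $sig (func (param (ref $B))))
```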
| |
Fairly simple, this uses the existing infrastructure to find opportunities
to refine the type of a global variable. This is a common pattern in j2wasm
for example, where a global begins as a null of $java.lang.Object (the
least specific type) but it is in practice always assigned an object of
some specific type.
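A sketch of the pattern (hypothetical types):
```
;; before: the global is declared with the least specific type
(global $instance (mut (ref null $java.lang.Object)) (ref.null none))
;; but in practice every assignment to it is a $Foo:
(global.set $instance (struct.new $Foo ..))
;; after: the global's type is refined
(global $instance (mut (ref null $Foo)) (ref.null none))
```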
| |
This specializes the fields of structs based on the types written to them. That is,
if a field is of type A but in practice we always write some subtype B to it
then we can change the type of the field to that.
On j2wasm this manages to improve at least one field in 2% of types. Not a
large amount, but this does lead to further benefits in later opts (e.g. about a third
of the improvements are to turn a field non-nullable).
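A sketch (hypothetical types):
```
;; before: the field is declared as (ref null $A)
(type $T (struct (field (mut (ref null $A)))))
;; but every struct.new / struct.set writes a $B, a subtype of $A, so after the
;; pass the field is specialized:
(type $T (struct (field (mut (ref null $B)))))
```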
| |
Just as the --nominal flag forces all types to be parsed as nominal, the
--structural flag forces all types to be parsed as equirecursive. This is the
current default behavior, but a future PR will change the default to parse types
as either structural or nominal according to their syntax or encoding. This new
flag will then be necessary to get the current behavior.
Also take this opportunity to deduplicate more flags in the help tests.
| |
Add a new pass to perform global type optimization. So far this just
does one thing, to find fields with no struct.set and to turn them
immutable (where possible - sub and supertypes must agree).
To do that, this adds a GlobalTypeRewriter utility which rewrites
all the heap types in the module, allowing changes while doing so.
In this PR, the change is to flip the mutable field. Otherwise, the
utility handles all the boilerplate of creating temp heap types using
a TypeBuilder, and it handles replacing the types in every place
they are used in the module.
This is not enabled by default yet as I don't see enough of a benefit
on j2cl. This PR is basically the simplest thing to do in the space of
global type optimization, and the simplest way I can think of to
fully test the GlobalTypeRewriter (which can't be done as a unit
test, really, since we want to emit a full module and validate it etc.).
This PR builds the foundation for more complicated things like
removing unused fields, subtyping fields, and more.
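A sketch of the single change done so far (hypothetical type):
```
;; before: the field is mutable, but no struct.set to it exists anywhere
(type $vtable (struct (field (mut funcref))))
;; after: the GlobalTypeRewriter emits the same type with the mutability removed
(type $vtable (struct (field funcref)))
```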
| |
Locally I saw a 10% speedup on j2cl but reports of regressions have
arrived, so let's disable it for now pending investigation. The option added
here should make it easy to experiment.
| |
Previously the set of functions to keep was initially empty, then the profile
added new functions to keep, then the --keep-funcs functions were added, then
the --split-funcs functions were removed. This method of composing these
different options was arbitrary and not necessarily intuitive, and it prevented
reasonable workflows from working. For example, providing only a --split-funcs
list would result in all functions being split out no matter which functions
were listed.
To make the behavior of these options, and --split-funcs in particular, more
intuitive, disallow mixing them and when --split-funcs is used, split out only
the listed functions.
| |
An "intrinsic" is modeled as a call to an import. We could also add new
IR things for them, but that would take more work and lead to less
clear errors in other tools if they try to read a binary using such a
nonstandard extension.
A first intrinsic is added here: call.without.effects. This is basically the same
as call_ref except that the optimizer is free to assume the call has no
side effects. Consequently, if the result is not used then it can be optimized
out (whereas normally, even if the result is not used, side effects could have kept it around).
Likewise, the lack of side effects allows more reordering and other
things.
A lowering pass for intrinsics is provided. Rather than automatically
lower them to normal wasm at the end of optimizations, the user must
call that pass explicitly. A typical workflow might be
-O --intrinsic-lowering -O
That optimizes with the intrinsic present - perhaps removing calls
thanks to it - then lowers it into normal wasm - it turns into a call_ref -
and then optimizes further, which could turn the call_ref into a
direct call, potentially inlining it, etc.
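A small sketch of how the intrinsic appears in a module (the import module/name follow the convention described for Binaryen intrinsics, "binaryen-intrinsics" / "call.without.effects"; treat the exact signature here as illustrative):
```
;; call.without.effects is modeled as an import; the last argument is the
;; function to call, and the other arguments are forwarded to it
(import "binaryen-intrinsics" "call.without.effects"
  (func $call_i32 (param i32 funcref) (result i32)))
(elem declare func $target)
(func $target (param i32) (result i32)
  (local.get 0))
(func $user (result i32)
  ;; behaves like (call $target (i32.const 5)), but the optimizer may assume it
  ;; has no side effects, and may remove it entirely if the result is unused
  (call $call_i32 (i32.const 5) (ref.func $target)))
```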
| |
To avoid requiring a static memory allocation, wasm-split's instrumentation
defaults to recording profile data in Wasm globals. This causes problems for
multithreaded applications because the globals are thread-local, but it is not
always feasible to arrange for a separate profile to be dumped on each thread.
To simplify the profiling of such multithreaded applications, add a new
instrumentation mode that stores the profiling data in shared memory instead of
in globals. This allows a single profile to be written that correctly reflects
the called functions on all threads.
This new mode is not on by default because it requires users to ensure that the
program will not trample the in-memory profiling data. The data is stored
beginning at address zero and occupies one byte per declared function in the
instrumented module. Emscripten can be told to leave this memory free using the
GLOBAL_BASE option.
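A sketch of what the per-function instrumentation amounts to in this mode (hypothetical shape; the real instrumentation may differ in details):
```
;; the profile lives at the bottom of the (shared) memory: one byte per declared
;; function, so the function with declared index N marks byte N when it runs
(func $foo ;; an instrumented function with declared index 42
  (i32.atomic.store8 (i32.const 42) (i32.const 1))
  ;; ... original body ...
)
```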