| Commit message | Author | Age | Files | Lines |
| |
This finds types that can be merged into their supertype: types that add no
fields and are not used in casts, etc., so we might as well use the supertype
instead. This complements TypeSSA, in that it can merge back the new types that
TypeSSA created if we never found a use for them. Without this, TypeSSA
can bloat binary size quite a lot (I see 10-20%).
|
| |
This creates a new nominal type for each (interesting) struct.new. That then allows
type-based optimizations to be more precise, as those optimizations will in effect
track separate info for each struct.new. This is kind of like SSA, though we
do not handle merges. For example:
x = struct.new $A (5);
print(x.value);
y = struct.new $A (11);
print(y.value);
=>
x = struct.new $A.x (5);
print(x.value);
y = struct.new $A.y (11);
print(y.value);
After the pass runs, each of those struct.news creates a unique type, and type-based
analysis can see that 5 or 11 is the only value written to its type (if nothing else
writes there).
This bloats the type section with the new subtypes, so it is best used with a pass
that merges unneeded duplicate types, which a later PR will add. That later PR will
merge back exactly the types created here, which are nominally different but
otherwise indistinguishable.
This pass is not enabled by default. It is not yet clear where it fits best in the
pipeline, as it must be balanced against type merging, and it might be better to run
multiple rounds of optimization between the two. This needs more investigation.
|
| |
Before we implemented bottom heap types, `ref.null` had to be annotated with
specific types. The `LUBFinder` utility ignored these types so that it could
find the best LUB from all considered non-null expressions, then go back and
update the type annotations on the nulls to match that LUB. Now that we have
bottom types, however, none of that is necessary, and in fact ignoring nulls can
miss possible refinements to bottom types.
Update and simplify `LUBFinder` so that it is a simple wrapper around the
underlying `Type::getLeastUpperBound` utility with no additional logic. Update
tests to account for the more powerful optimizations.
|
| |
The caller fills the map with what is relevant anyhow, so we don't need to check
before looking for an item in the map.
|
| |
The logic split out into mapTypes receives a map of old type => new type and then
updates the module to replace the old types with the new ones.
This will be useful in a future pass that merges types: we will tell it to map the
types to be merged to the types we want to merge them into.
This also removes the assert that all types are present in the map, allowing some
to be left out (the new pass that will use this will only want to map some types).
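To illustrate the new partial-map semantics, here is a small, runnable C++ toy
(strings stand in for heap types; this is not the actual Binaryen code):
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
  // Toy module: each entry is a use of a type.
  std::vector<std::string> typeUses = {"$A", "$B", "$C"};
  // Map only the types we want to merge; $C is intentionally absent.
  std::unordered_map<std::string, std::string> oldToNew = {{"$B", "$A"}};
  for (auto& use : typeUses) {
    auto it = oldToNew.find(use);
    if (it != oldToNew.end()) {
      use = it->second; // mapped: replace old with new
    }
    // unmapped: leave unchanged, rather than asserting as before
  }
  for (auto& use : typeUses) {
    std::cout << use << '\n'; // prints $A $A $C
  }
}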
|
| |
Equirecursive typing is no longer on the standards track and its implementation
is extremely complex. Remove it.
|
| |
This reflects the naming used in the spec.
|
| |
If the target is a bottom type then it is a heap type but not a signature
type, and we should treat the call as unreachable (and not crash).
|
| |
This is more modern and (IMHO) easier to read than that old C typedef
syntax.
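For illustration, the same alias in both styles (a generic example, not a line
from this change):
#include <vector>

// Old C-style typedef: the new name is buried inside the declarator.
typedef std::vector<int> IntVecOld;

// Modern alias declaration: name first, definition after the '='.
using IntVecNew = std::vector<int>;

// 'using' also supports alias templates, which typedef cannot express.
template<typename T>
using Vec = std::vector<T>;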
|
| |
All that code did was filter contents by the type of the RefCast. We do that for all
expressions now, so it was redundant.
|
| |
We don't actually have the distributive property, since our
PossibleContents representation is an approximation, and the fuzzer found
a case where that is noticeable. See the new comment and testcase for
details.
I measured speed and memory usage, and this causes almost no
noticeable change.
|
| |
First, we forgot to note the type annotation on `ArrayNewSeg` instructions, so
in small modules where these are the only annotated instructions, the type
section would be incomplete.
Second, in the interpreter we were reserving space for the array before checking
that the segment access was valid. This could cause huge allocations that threw
bad_alloc exceptions before the interpreter could get around to trapping. Fix
the problem by reserving the array only after validating the arguments, as in
the sketch below.
Fixes #5236.
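A minimal sketch of the validate-then-reserve pattern (names and structure
invented for illustration; not the interpreter's actual code):
#include <cstdint>
#include <stdexcept>
#include <vector>

// Check the segment access first, so an absurd size becomes a trap
// instead of a bad_alloc thrown from a huge reservation.
std::vector<uint8_t> makeArrayFromSegment(
  const std::vector<uint8_t>& segment, uint64_t offset, uint64_t size) {
  if (size > segment.size() || offset > segment.size() - size) {
    throw std::runtime_error("trap: out-of-bounds segment access");
  }
  std::vector<uint8_t> contents;
  contents.reserve(size); // safe: bounds are already validated
  contents.insert(contents.end(),
                  segment.begin() + offset,
                  segment.begin() + offset + size);
  return contents;
}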
|
| |
Monomorphization finds cases where we send more refined types to a function
than it declares. In such cases we can copy the function and refine the parameters:
// B is a subtype of A
foo(new B());
function foo(x : A) { .. }
=>
foo_B(new B()); // call redirected to the refined copy
function foo(x : A) { .. }   // unchanged
function foo_B(x : B) { .. } // refined copy
This increases code size, so it may not be worth it in all cases. This initial PR is
hopefully enough to start experimenting with the performance impact, and so it
does not enable the pass by default.
This adds two variations of monomorphization: one that always monomorphizes,
and the default, "careful" variation, which checks whether monomorphizing lets the
refined function actually be better than the original (say, by removing a cast). If
there is no improvement then it makes no changes. This saves a significant amount
of code size - on j2wasm the careful version increases size by 13% instead of 20% -
but it obviously runs more slowly.
|
| |
* Update MemoryPacking for array.new_data
The MemoryPacking pass looks at all instructions that reference memory segments
to determine how they can be optimized. #5214 introduced a new instruction that
references memory segments, array.new_data, but did not update MemoryPacking
accordingly. This omission meant that MemoryPacking could produce invalid or
misoptimized modules in the presence of array.new_data.
Fix the problem by making MemoryPacking aware of array.new_data. Consider
array.new_data when determining whether a segment is used, and update
array.new_data to reflect the new, optimized segment numbering afterward. To
keep things simple, do not try to split any segment that is referred to by
an array.new_data instruction.
* fix
* Add test explanations
* Fix possible-contents.h for `array.new_{data,elem}`
This code was not properly updated in #5214, so GUFA would incorrectly optimize
out `array.new_data` and `array.new_elem` instructions. Fix the problem by
making these instructions data flow roots.
* fix
* move tests
|
| |
In order to test them, fix the binary and text parsers to accept passive data
segments even if a module has no memory. In addition to parsing and emitting the
new instructions, also implement their validation and interpretation. Test the
interpretation directly with wasm-shell tests adapted from the upstream spec
tests. Running the upstream spec tests directly would require fixing too many
bugs in the legacy text parser, so it will have to wait for the new text parser
to be ready.
|
| |
These operations emit a completely different type from their input, so they must be
marked as roots, and not as things that flow values through them (because then
we would filter everything out, as the types are not compatible).
Fixes #5219
|
| |
If a heap type only ever appears as the result of a read, we must include it in
the analysis in ModuleUtils, even though it isn't written in the binary format.
Otherwise, analyses using ModuleUtils can error when they fail to find all the
types they encounter in the list of types.
Fixes #5180
|
| |
The fallthrough there is trickier because the value is evaluated before the condition.
Unlike other fallthroughs, the value is not last, so we need to check if the condition
(which is after it) interferes with it.
|
| |
This makes the logic symmetric and easier to read.
Measuring speed, this seems identical to before, so that concern seems fine.
|
| |
Poking through the git history, this was some kind of hack for a problem that
appears to have not existed for at least 5 years.
Logically, refinalize should never change a type from unreachable to none, or at
least, if we have a place that does this, we should manually do the necessary things
around it, like updating the function's return type. The opposite, none (or anything
else) to unreachable, is the common case, where we use refinalize to propagate
that type upwards.
Fuzzing also finds no issues.
|
| |
We already provided a specialization of `std::hash` for arbitrary pairs, so add
one for `std::tuple` as well. Use the new specialization where we were
previously using nested pairs just to be able to use the pair specialization.
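The shape of the idea, sketched as a standalone functor rather than the actual
std::hash specialization (the mixing constants are the common boost-style ones,
not necessarily Binaryen's):
#include <cstddef>
#include <functional>
#include <tuple>
#include <unordered_map>

// Fold each tuple element's hash into a running seed.
template<typename... Ts>
struct TupleHash {
  std::size_t operator()(const std::tuple<Ts...>& t) const {
    std::size_t seed = 0;
    std::apply(
      [&](const Ts&... xs) {
        ((seed ^= std::hash<Ts>{}(xs) + 0x9e3779b9 + (seed << 6) +
                  (seed >> 2)),
         ...);
      },
      t);
    return seed;
  }
};

// Usage: key a map by a tuple directly, instead of nested pairs.
using Key = std::tuple<int, int, int>;
std::unordered_map<Key, int, TupleHash<int, int, int>> cache;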
|
| |
When we read from a struct/array using a cone type, read from the types in the cone
and nothing else. Previously we used the declared type in the wasm, which might be
larger (in both the base type and the depth). Likewise in a write.
To do this, this extends ConeReadLocation with a depth (previously the depth there
was assumed to be infinite; now it can be limited).
After this we are fully utilizing cone types in GUFA, as the test changes show (or at
least I can't think of any other uses for cones).
|
| |
modern nulls (#5154)
Modern nulls never compare equal unless they have the same type too.
|
| |
Now that we have a cone type, we are able to represent in PossibleContents the
natural content of a wasm location: a type or any of its subtypes. This allows us to
enforce the wasm typing rules, that is, to filter the data arriving at a location by the
wasm type of the location.
Technically this could be unnecessary if we had full implementations of flowFoo
and so forth, that is, tailored code for each wasm expression that makes sure we
only contain and flow content that fits the wasm type. At the moment we don't have
that, and until the wasm spec stabilizes it's probably not worth the effort. Instead,
simply filter based on the type, which gives the same result (though it does take
a little more work; I measured it at around 3% of runtime).
While doing so, normalize cones to their actual maximum depth, which simplifies
things and will help more later as well.
|
| |
Instead of Many, use a proper Cone Type for the data, as appropriate.
|
| |
This computes how deep the children of a heap type are. This will be useful in
cone type optimizations, since we want to "normalize" cones: a cone of depth
infinity can just be a cone of the actual maximum depth of existing children, etc.,
and it's simpler to have a single canonical representation to avoid extra work.
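A runnable toy of that computation (integer ids stand in for heap types;
Binaryen's real version works over its type system):
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

using TypeId = uint32_t;

// Depth of the subtype tree below |type|: 0 if it has no subtypes,
// otherwise 1 + the deepest child. A cone of "infinite" depth over
// |type| can then be normalized to a cone of exactly this depth.
uint32_t getMaxDepth(
  TypeId type,
  const std::unordered_map<TypeId, std::vector<TypeId>>& subTypes) {
  uint32_t depth = 0;
  auto it = subTypes.find(type);
  if (it != subTypes.end()) {
    for (TypeId sub : it->second) {
      depth = std::max(depth, 1 + getMaxDepth(sub, subTypes));
    }
  }
  return depth;
}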
|
| |
This will be useful in further cone type optimizations.
|
| |
Avoid allocating there. This is both faster and ensures we never modify
our internal data structures after the constructor runs.
|
| |
When the heap types are not subtypes of each other, but a null is possible, the
intersection exists and is a null. That null must be the shared bottom type.
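A sketch of that rule in Binaryen-flavored pseudocode (treat the exact names,
such as getBottom, as assumptions rather than the real API):
// Sketch only; names approximate.
// If neither heap type is a subtype of the other, no non-null value
// can inhabit both. If a null is possible, the intersection is exactly
// a null, typed with the hierarchy's shared bottom type (e.g. none).
if (!HeapType::isSubType(a, b) && !HeapType::isSubType(b, a)) {
  if (nullIsPossible) {
    return PossibleContents::literal(Literal::makeNull(a.getBottom()));
  }
  return PossibleContents::none(); // empty intersection
}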
|
| |
A cone type is a PossibleContents that has a base type and a depth, and it
contains all subtypes up to that depth. So a cone of depth 0 is an exact type
from before, etc.
This only adds cone type computations when combining types; that is, when we
combine two exact types we might get a cone, etc. This does not yet use the
cone info in all places (like struct gets and sets), and it does not yet define roots
of cone types, all of which is left for later. In other words, this is the MVP of cone
types: just enough to add them, pass the tests, and test the new functionality.
|
| |
With the goal of supporting null characters (i.e. zero bytes) in strings.
Rewrite the underlying interned `IString` to store a `std::string_view` rather
than a `const char*`, reduce the number of map lookups necessary to intern a
string, and present a more immutable interface.
Most importantly, replace the `c_str()` method that returned a `const char*`
with a `toString()` method that returns a `std::string`. This new method can
correctly handle strings containing null characters. A `const char*` can still
be had by calling `data()` on the `std::string_view`, although this usage should
be discouraged.
This change is NFC in spirit, although not in practice. It does not intend to
support any particular new functionality, but it is probably now possible to use
strings containing null characters in at least some cases. At least one parser
bug is also incidentally fixed. Follow-on PRs will explicitly support and test
strings containing nulls for particular use cases.
The C API still uses `const char*` to represent strings. As strings containing
nulls become better supported by the rest of Binaryen, this will no longer be
sufficient. Updating the C and JS APIs to use pointer, length pairs is left as
future work.
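A runnable toy showing the practical difference (the struct here is a stand-in,
not the real IString):
#include <iostream>
#include <string>
#include <string_view>

// Stand-in for an interned string: a view over interned storage.
struct InternedString {
  std::string_view str;
  // New-style accessor: copies the full view, embedded nulls included.
  std::string toString() const { return std::string(str); }
};

int main() {
  InternedString s{std::string_view("ab\0cd", 5)};
  std::cout << s.toString().size() << '\n'; // 5: the null survives
  // C-string usage stops at the first null byte, losing data; this is
  // why c_str()-style access is being replaced.
  std::cout << std::string(s.str.data()).size() << '\n'; // 2
}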
|
| |
Previously we treated each local index as a location, and every local.set to
that index could be read by every local.get. With this we connect only the
relevant sets to gets.
Practically speaking, this removes LocalLocation, which is what was just
described, and instead there is ParamLocation for incoming parameter
values, while local.get/set use normal ExpressionLocations to connect a
set to a get.
I was worried this would be slow, since computing LocalGraph takes time,
but it actually more than makes up for itself on J2Wasm, and we end up
faster, I guess because we do less updating after local.sets.
This makes a noticeable change on the J2Wasm binary, and perhaps will
help with benchmarks.
|
| |
These types, `none`, `nofunc`, and `noextern` are uninhabited, so references to
them can only possibly be null. To simplify the IR and increase type precision,
introduce new invariants that all `ref.null` instructions must be typed with one
of these new bottom types and that `Literals` have a bottom type iff they
represent null values. These new invariants require several additional changes.
First, it is now possible that the `ref` or `target` child of a `StructGet`,
`StructSet`, `ArrayGet`, `ArraySet`, or `CallRef` instruction has a bottom
reference type, so it is not possible to determine what heap type annotation to
emit in the binary or text formats. (The bottom types are not valid type
annotations since they do not have indices in the type section.)
To fix that problem, update the printer and binary emitter to emit unreachables
instead of the instruction with undetermined type annotation. This is a valid
transformation because the only possible value that could flow into those
instructions in that case is null, and all of those instructions trap on nulls.
That fix uncovered a latent bug in the binary parser in which new unreachables
within unreachable code were handled incorrectly. This bug was not previously
found by the fuzzer because we generally stop emitting code once we encounter an
instruction with type `unreachable`. Now, however, it is possible to emit an
`unreachable` for instructions that do not have type `unreachable` (but are
known to trap at runtime), so we will continue emitting code. See the new
test/lit/parse-double-unreachable.wast for details.
Update other miscellaneous code that creates `RefNull` expressions and null
`Literals` to maintain the new invariants as well.
|
| |
The last parameter is the function to call, and we treated it like a normal parameter.
This is mostly only an issue during debugging, but in theory sending this extra value
could cause us to optimize less later (since it gets added to what the local of that
index can contain).
Also add assertions that would have caught this earlier.
|
| |
Annotations on array.get and array.set were not being counted, and the code could
generally be simplified, since `count` already ignores types that don't need to
be counted.
|
| |
Previously only WalkerPasses had access to the `getPassRunner` and
`getPassOptions` methods. Move those methods to `Pass` so all passes can use
them. As a result, the `PassRunner` passed to `Pass::run` and
`Pass::runOnFunction` is no longer necessary, so remove it.
Also update `Pass::create` to return a unique_ptr, which is more efficient than
having it return a raw pointer only to have the `PassRunner` wrap that raw
pointer in a `unique_ptr`.
Delete the unused template `PassRunner::getLast()`, which looks like it was
intended to enable retrieving previous analyses and has been in the code base
since 2015 but is not implemented anywhere.
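A hedged sketch of the resulting interface (simplified; the actual Binaryen
declarations differ in detail):
#include <memory>

struct Module;
struct Function;
struct PassRunner;
struct PassOptions;

class Pass {
public:
  virtual ~Pass() = default;

  // Now on Pass itself, so every pass (not just WalkerPasses) can use
  // them; the runner is set by the PassRunner before running.
  PassRunner* getPassRunner() { return runner; }
  PassOptions& getPassOptions();

  // The PassRunner* parameter these used to take is gone; passes call
  // getPassRunner() instead.
  virtual void run(Module* module) = 0;
  virtual void runOnFunction(Module* module, Function* func) {}

  // Returns an owning pointer directly, instead of a raw pointer that
  // the PassRunner then wrapped in a unique_ptr.
  virtual std::unique_ptr<Pass> create() = 0;

private:
  PassRunner* runner = nullptr;
};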
|
| |
Avoid manually doing bitshifts etc. - leave combining to the core hash
logic, which can do a better job.
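An illustration of the before and after (the helper below is the generic
boost-style combiner; Binaryen's own helper in its hash utilities plays the
same role):
#include <cstddef>
#include <functional>

// Before (sketch): ad-hoc mixing, easy to get wrong:
//   size_t h = hashA ^ (hashB << 1);

// After: route all combining through one helper with good mixing.
inline void hashCombine(std::size_t& seed, std::size_t value) {
  seed ^= value + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

inline std::size_t hashPair(int a, int b) {
  std::size_t seed = 0;
  hashCombine(seed, std::hash<int>{}(a));
  hashCombine(seed, std::hash<int>{}(b));
  return seed;
}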
|
| |
This does not actually add cone types, but it does NFC refactoring towards them.
Specifically, it replaces the internal ExactType with ConeType; the latter
has a depth, so a cone type of depth 0 is the old exact type.
Cone types with depth > 0 are not possible yet, keeping this NFC.
I believe this design with a depth for cone types adds little overhead. It does add
to the size of ConeType, but the variant there is larger anyhow (it contains a
Literal). And things like isSubType need to loop anyhow, so looping up to the
depth in checks won't make things slower.
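A sketch of the representation being described (field and type names
illustrative, not the real possible-contents.h):
#include <cstdint>
#include <variant>

struct HeapType { uint32_t id; };      // stand-in
struct Literal { uint8_t bytes[16]; }; // stand-in; note: large

// A cone: a base heap type plus how many levels of subtypes it covers.
// Depth 0 is the old "exact type".
struct ConeType {
  HeapType type;
  uint32_t depth;
};

// The variant's size is dominated by the Literal alternative, so
// adding a depth to ConeType costs nothing extra overall.
using Contents = std::variant<std::monostate, Literal, ConeType>;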
|
| |
We compared types and not heap types, so a difference in nullability
confused us. But at that point in the code, we've ruled out nulls, so we
should focus on heap types only.
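A runnable toy of the fix (types simplified; Type::getHeapType is the real
Binaryen accessor, but the rest is invented):
#include <cassert>

struct HeapType {
  int id;
  bool operator==(const HeapType& other) const { return id == other.id; }
};

struct Type {
  HeapType heapType;
  bool nullable;
  HeapType getHeapType() const { return heapType; }
  bool operator==(const Type& other) const {
    return heapType == other.heapType && nullable == other.nullable;
  }
};

int main() {
  Type a{{1}, false}; // like (ref $T)
  Type b{{1}, true};  // like (ref null $T)
  assert(!(a == b)); // old check: nullability makes them look different
  assert(a.getHeapType() == b.getHeapType()); // fix: heap types match
}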
|
| |
Move the logic to the GUFA pass.
|
| |
This moves the logic that adds connections from signatures to functions out of the
top level and into the RefFunc logic. That way we only add those connections for
functions that actually have a RefFunc, which avoids us thinking that a function
without one can be reached by a call_ref of its type.
This has a small but non-zero benefit on j2wasm.
|
| |
If the PossibleContents for the two sides have no possible intersection then the
result must be 0.
|
| |
Fixes #5041
|
| |
The GC spec has been updated to have heap type annotations on call_ref and
return_call_ref. To avoid breaking users, we will have a graceful, multi-step
upgrade to the annotated version of call_ref, but since return_call_ref has no
users yet, update it in a single step.
|
| |
Slightly similar to ref.cast, but simpler.
Also update some TODO text.
|