| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
This was wrong for over a decade and apparently just never used. The operation
should have been called "take", since it grabbed data from the other item and
then set that other item to empty. Fix it so that it actually swaps.
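A minimal sketch of the difference, using a hypothetical Item type (the commit does not show the actual class):

```cpp
#include <utility>
#include <vector>

struct Item {
  std::vector<int> data;

  // Before: behaved like a "take", grabbing the other item's data and
  // leaving that item empty.
  void brokenSwap(Item& other) {
    data = std::move(other.data);
    other.data.clear();
  }

  // After: actually exchanges contents in both directions.
  void swap(Item& other) { std::swap(data, other.data); }
};
```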
|
|
|
|
|
|
| |
Since these types may be carrying errors that need to be handled or
propagated, it is always an error not to use them in some way. Adding
the [[nodiscard]] attribute caused the compiler to find a few instances
where we were incorrectly ignoring results. Fix these places.
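A sketch of the pattern, with an illustrative Result type rather than Binaryen's actual one:

```cpp
#include <iostream>
#include <string>
#include <variant>

// [[nodiscard]] on an error-carrying type makes the compiler flag
// call sites that silently drop a possible error.
struct [[nodiscard]] Result {
  std::variant<int, std::string> value; // int on success, message on error
  bool isErr() const { return value.index() == 1; }
};

Result parse(const char* s) {
  return s ? Result{42} : Result{std::string("null input")};
}

int main() {
  parse("x"); // warning: ignoring return value declared [[nodiscard]]
  auto r = parse(nullptr); // fine: the result is inspected
  if (r.isErr()) {
    std::cout << "handled the error\n";
  }
}
```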
|
|
|
|
|
|
|
|
|
|
| |
When IRBuilder builds an empty non-block scope such as a function body,
an if arm, a try block, etc., it needs to produce some expression to
represent the empty contents. Previously it produced a nop, but change
it to produce an empty block instead. The binary writer and printer have
special logic to elide empty blocks, so this produces smaller output.
Update J2CLOpts to recognize functions containing empty blocks as
trivial to avoid regressing one of its tests.
|
|
|
|
|
| |
This new API lets us ask if a set can be safely moved to a new position. The new
position must be the location of an expression from a particular class (this
allows us to populate the IR once and then query any of those locations).
|
|
|
|
|
| |
Now that the header includes topological sort utilities in addition to
the utility that iterates over all topological orders, it makes more
sense for it to be named topological_sort.h
|
|
|
|
|
|
| |
This saves memory and could in principle improve performance, although a
quick experiment with 30 samples on ReorderGlobals did not yield a
statistically significant improvement. At any rate, using Index is more
consistent with other parts of the code base.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rather than finding the minimum sort with respect to the original order
of vertices, find the minimum sort with respect to an arbitrary
user-provided comparator. Users of the minSort utility previously had to
sort their input graphs according to their desired ordering, but now
they can simply provide their comparator instead.
Take advantage of the new functionality in ReorderGlobals and also
standardize on a single data type for representing dependence graphs to
avoid unnecessary conversions. Together, these changes slightly speed up
ReorderGlobals.
Move the topological sort code previously in a .cpp file into the header
so the comparator can be provided as a lambda template parameter instead
of as a `std::function`. This makes ReorderGlobals about 5% faster.
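A sketch of the header-only shape this describes, with assumed names rather than Binaryen's exact signatures; taking the comparator as a template parameter lets a call-site lambda inline where a `std::function` would not:

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// Kahn's algorithm with a min-heap of ready vertices. `Cmp` is
// deduced from a lambda at the call site, so the comparison inlines.
template<typename Cmp>
std::vector<size_t>
minTopologicalSort(const std::vector<std::vector<size_t>>& succs, Cmp cmp) {
  std::vector<size_t> indegree(succs.size());
  for (auto& ss : succs) {
    for (auto s : ss) {
      ++indegree[s];
    }
  }
  // priority_queue is a max-heap, so flip the comparator to pop the
  // least available vertex first.
  auto flipped = [&](size_t a, size_t b) { return cmp(b, a); };
  std::priority_queue<size_t, std::vector<size_t>, decltype(flipped)> ready(
    flipped);
  for (size_t i = 0; i < succs.size(); ++i) {
    if (indegree[i] == 0) {
      ready.push(i);
    }
  }
  std::vector<size_t> order;
  while (!ready.empty()) {
    size_t curr = ready.top();
    ready.pop();
    order.push_back(curr);
    for (auto s : succs[curr]) {
      if (--indegree[s] == 0) {
        ready.push(s);
      }
    }
  }
  return order;
}
```

A pass like ReorderGlobals might then pass a lambda over its own data, e.g. `minTopologicalSort(deps, [&](size_t a, size_t b) { return weight[a] > weight[b]; })`.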
|
|
|
|
|
|
|
| |
Previously they were structs and their results were accessed with
`operator*()`, but that was unnecessarily complicated and could lead to
problems with temporary lifetimes being too short. Simplify the
utilities by making them functions. This also allows the wrapper
templates to infer the proper element types automatically.
|
|
|
|
|
|
|
|
| |
Reuse the code implementing Kahn's topological sort algorithm with a new
configuration that uses a min-heap to always choose the best available
element.
Also add wrapper utilities that can find topological sorts of graphs
with arbitrary element types, not just indices.
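A sketch of what such a wrapper might look like, with assumed names: translate arbitrary node values to dense indices, run the index-based sort, and translate the result back:

```cpp
#include <cstddef>
#include <map>
#include <queue>
#include <vector>

// A plain queue-based Kahn's sort over dense indices.
std::vector<size_t>
topologicalSort(const std::vector<std::vector<size_t>>& succs) {
  std::vector<size_t> indegree(succs.size());
  for (auto& ss : succs) {
    for (auto s : ss) {
      ++indegree[s];
    }
  }
  std::queue<size_t> ready;
  for (size_t i = 0; i < succs.size(); ++i) {
    if (indegree[i] == 0) {
      ready.push(i);
    }
  }
  std::vector<size_t> order;
  while (!ready.empty()) {
    size_t curr = ready.front();
    ready.pop();
    order.push_back(curr);
    for (auto s : succs[curr]) {
      if (--indegree[s] == 0) {
        ready.push(s);
      }
    }
  }
  return order;
}

// Wrapper: sort a graph whose nodes are arbitrary values.
template<typename T>
std::vector<T> topologicalSortOf(const std::map<T, std::vector<T>>& graph) {
  std::vector<T> values;
  std::map<T, size_t> indices;
  for (auto& [node, _] : graph) {
    indices.emplace(node, values.size());
    values.push_back(node);
  }
  std::vector<std::vector<size_t>> succs(values.size());
  for (auto& [node, ss] : graph) {
    for (auto& s : ss) {
      succs[indices.at(node)].push_back(indices.at(s));
    }
  }
  std::vector<T> order;
  for (auto i : topologicalSort(succs)) {
    order.push_back(values[i]);
  }
  return order;
}
```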
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before, we just had a map that was accessed as localGraph.getSetses[get]; now
there is a call, localGraph.getSets(get), which more cleanly hides the internal
implementation details.
Also rename getSetses => getSetsMap.
This will allow a later PR to optimize the internals of this API.
This is performance-neutral as far as I can measure. (We do replace a direct read
from a data structure with a call, but the call is in a header and should always get
inlined.)
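A sketch of the new API's shape (illustrative, not Binaryen's exact code):

```cpp
#include <unordered_map>
#include <vector>

struct LocalGet;
struct LocalSet;

class LocalGraph {
public:
  using Sets = std::vector<LocalSet*>;

  // Inlined in the header; callers write localGraph.getSets(get)
  // instead of localGraph.getSetses[get].
  const Sets& getSets(LocalGet* get) const { return getSetsMap.at(get); }

private:
  // Renamed from getSetses; now an internal implementation detail.
  std::unordered_map<LocalGet*, Sets> getSetsMap;
};
```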
|
|
|
|
|
| |
Single-segment mappings were already handled in readNextDebugLocation,
but not in readSourceMapHeader.
|
|
|
|
|
|
|
|
|
| |
Use an extension of Kahn's algorithm for finding topological orders that
iteratively makes every possible choice at every step to find all the
topological orders. The order being constructed and the set of possible
choices are managed in-place in the same buffer, so the algorithm takes
linear time and space plus amortized constant time per generated order.
This will be used in an upcoming type optimization.
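For intuition, here is a simpler recursive formulation of the same enumeration; the algorithm described above is an iterative, in-place version of this idea:

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <vector>

// Enumerate all topological orders by trying every available vertex
// at every step. `succs` gives each vertex's successors; `indegree`
// counts incoming edges; `used` marks vertices already placed.
void allOrders(const std::vector<std::vector<size_t>>& succs,
               std::vector<size_t>& indegree,
               std::vector<bool>& used,
               std::vector<size_t>& order,
               const std::function<void(const std::vector<size_t>&)>& emit) {
  if (order.size() == succs.size()) {
    emit(order);
    return;
  }
  for (size_t i = 0; i < succs.size(); ++i) {
    if (used[i] || indegree[i] != 0) {
      continue; // not currently an available choice
    }
    // Choose i, recurse, then undo the choice.
    used[i] = true;
    order.push_back(i);
    for (auto s : succs[i]) {
      --indegree[s];
    }
    allOrders(succs, indegree, used, order, emit);
    for (auto s : succs[i]) {
      ++indegree[s];
    }
    order.pop_back();
    used[i] = false;
  }
}

int main() {
  // Edges 0 -> 1 and 0 -> 2; the orders are 0 1 2 and 0 2 1.
  std::vector<std::vector<size_t>> succs = {{1, 2}, {}, {}};
  std::vector<size_t> indegree = {0, 1, 1};
  std::vector<bool> used(succs.size());
  std::vector<size_t> order;
  allOrders(succs, indegree, used, order, [](const std::vector<size_t>& o) {
    for (auto i : o) {
      std::cout << i << ' ';
    }
    std::cout << '\n';
  });
}
```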
|
|
|
|
| |
This will be used in an upcoming type optimization pass and may be
generally useful.
|
|
|
|
|
|
|
|
|
| |
Implement a non-recursive version of Tarjan's Strongly Connected
Component algorithm that consumes and produces iterators for maximum
flexibility.
This will be used in an optimization that transforms the heap type graph
to use minimal recursion groups, which correspond to the strongly
connected components of the type graph.
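For reference, a sketch of the algorithm in its classic recursive form; the PR's version is non-recursive, driven by an explicit stack, and consumes and produces iterators:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Tarjan {
  const std::vector<std::vector<size_t>>& succs;
  std::vector<size_t> index, lowlink;
  std::vector<bool> onStack;
  std::vector<size_t> stack;
  std::vector<std::vector<size_t>> sccs;
  size_t nextIndex = 0;
  static constexpr size_t Unvisited = -1;

  Tarjan(const std::vector<std::vector<size_t>>& succs)
    : succs(succs), index(succs.size(), Unvisited), lowlink(succs.size()),
      onStack(succs.size()) {
    for (size_t v = 0; v < succs.size(); ++v) {
      if (index[v] == Unvisited) {
        visit(v);
      }
    }
  }

  void visit(size_t v) {
    index[v] = lowlink[v] = nextIndex++;
    stack.push_back(v);
    onStack[v] = true;
    for (auto w : succs[v]) {
      if (index[w] == Unvisited) {
        visit(w);
        lowlink[v] = std::min(lowlink[v], lowlink[w]);
      } else if (onStack[w]) {
        lowlink[v] = std::min(lowlink[v], index[w]);
      }
    }
    if (lowlink[v] == index[v]) {
      // v is the root of an SCC; pop the component off the stack.
      std::vector<size_t> scc;
      size_t w;
      do {
        w = stack.back();
        stack.pop_back();
        onStack[w] = false;
        scc.push_back(w);
      } while (w != v);
      sccs.push_back(std::move(scc));
    }
  }
};

int main() {
  // 0 -> 1 -> 2 -> 0 form a cycle; 3 -> 0 is separate.
  std::vector<std::vector<size_t>> graph = {{1}, {2}, {0}, {0}};
  Tarjan tarjan(graph);
  // tarjan.sccs now holds {2, 1, 0} and {3}.
}
```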
|
|
|
| |
Fixes #6776.
|
|
|
|
|
|
|
|
|
|
|
| |
Implement binary and text parsing and printing of shared basic heap types and
incorporate them into the type hierarchy.
To avoid the massive amount of code duplication that would be necessary if we
were to add separate enum variants for each of the shared basic heap types, use
bit 0 to indicate whether the type is shared and replace `getBasic()` with
`getBasic(Unshared)`, which clears that bit. Update all the use sites to record
whether the original type was shared and produce shared or unshared output
without code duplication.
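A sketch of the encoding trick with illustrative values, not Binaryen's actual definitions:

```cpp
#include <cstdint>

enum Shareability { Shared, Unshared };

struct HeapType {
  enum BasicHeapType : uint32_t {
    func = 0, // the shared variant would be func | 1
    ext = 2,
    any = 4,
  };

  uint32_t id;

  constexpr HeapType(BasicHeapType type) : id(type) {}

  bool isShared() const { return (id & 1) != 0; }

  // getBasic(Unshared) clears the shared bit, so existing switches
  // over BasicHeapType handle shared types without new variants.
  BasicHeapType getBasic(Shareability share) const {
    return BasicHeapType(share == Unshared ? (id & ~uint32_t(1)) : id);
  }
};
```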
|
|
|
|
|
|
|
| |
Since the BasicHeapTypes are in an enum, calling HeapType methods on them
requires something like `HeapType(HeapType::func).someMethod()`. This is
unnecessarily verbose, so add a new `HeapTypes` namespace that contains
constexpr HeapType globals that can be used instead, shortening this to
`HeapTypes::func.someMethod()`.
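A sketch with a minimal stand-in HeapType to show the shape of the change:

```cpp
#include <cstdint>

struct HeapType {
  enum BasicHeapType : uint32_t { func, ext, any };
  uint32_t id;
  constexpr HeapType(BasicHeapType type) : id(type) {}
  constexpr bool isBasic() const { return true; } // illustrative method
};

namespace HeapTypes {
constexpr HeapType func = HeapType::func;
constexpr HeapType ext = HeapType::ext;
constexpr HeapType any = HeapType::any;
} // namespace HeapTypes

// Before: HeapType(HeapType::func).isBasic()
// After:  HeapTypes::func.isBasic()
static_assert(HeapTypes::func.isBasic());
```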
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Parse the text format for shared composite types as described in the
shared-everything thread proposal. Update the parser to use 'comptype' instead
of 'strtype' to match the final GC spec and add the new syntactic class
'sharecomptype'.
Update the type canonicalization logic to take sharedness into account to avoid
merging shared and unshared types. Make the same change in the TypeMerging pass.
Ensure that shared and unshared types cannot be in a subtype relationship with
each other.
Follow-up PRs will add shared abstract heap types, binary parsing and emitting
for shared types, and fuzzer support for shared types.
|
|
|
|
|
| |
Remove `SExpressionParser`, `SExpressionWasmBuilder`, and `cashew::Parser`.
Simplify gen-s-parser.py. Remove the --new-wat-parser and
--deprecated-wat-parser flags.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The stringview types from the stringref proposal have three irregularities that
break common invariants and require pervasive special casing to handle properly:
they are supertypes of `none` but not subtypes of `any`, they cannot be the
targets of casts, and they cannot be used to construct nullable references. At
the same time, the stringref proposal has been superseded by the imported
strings proposal, which does not have these irregularities. The cost of
maintaining and improving our support for stringview types is no longer worth the
benefit of supporting them.
Simplify the code base by entirely removing the stringview types and related
instructions that do not have analogues in the imported strings proposal and do
not make sense in the absence of stringviews.
Three remaining instructions, `stringview_wtf16.get_codeunit`,
`stringview_wtf16.slice`, and `stringview_wtf16.length` take stringview operands
in the stringref proposal but cannot be removed because they lower to operations
from the imported strings proposal. These instructions are changed to take
stringref operands in Binaryen IR, and to allow a graceful upgrade path for
users of these instructions, the text and binary parsers still accept but ignore
`string.as_wtf16`, which is the instruction used to convert stringrefs to
stringviews. The binary writer emits code sequences that use scratch locals and `string.as_wtf16` to keep the output valid.
Future PRs will further align binaryen with the imported strings proposal
instead of the stringref proposal, for example by making `string` a subtype of
`extern` instead of a subtype of `any` and by removing additional instructions
that do not have analogues in the imported strings proposal.
|
|
|
|
|
|
|
| |
Some of the example and gtest tests parse wat. Update them to use the new wat
parser, fixing problems with their text input. In one case, comment out an
outlining test entirely for now because the new parser produces different IR
that changes the test output, but it is not obvious how to understand and fix
the test.
|
|
|
|
| |
Disallow returns from having any children, even unreachable children, in
functions that do not return any values.
|
|
|
|
|
|
|
|
|
|
|
| |
The lexer currently lexes tokens eagerly and stores them in a `Token` variant
ahead of when they are actually requested by the parser. It is wasteful,
however, to classify tokens before they are requested by the parser because it
is likely that the next token will be precisely the kind the parser requests.
The work of checking and rejecting other possible classifications ahead of time
is not useful.
To make incremental progress toward removing `Token` completely, lex parentheses
on demand instead of eagerly.
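A sketch of what on-demand lexing of parentheses can look like, with illustrative names:

```cpp
#include <cstddef>
#include <string_view>

// Instead of eagerly classifying the next token into a variant,
// answer the parser's specific question directly at the current
// offset, consuming input only on a match.
struct Lexer {
  std::string_view buffer;
  size_t pos = 0;

  bool takeLParen() {
    skipSpace();
    if (pos < buffer.size() && buffer[pos] == '(') {
      ++pos;
      return true;
    }
    return false;
  }

  bool takeRParen() {
    skipSpace();
    if (pos < buffer.size() && buffer[pos] == ')') {
      ++pos;
      return true;
    }
    return false;
  }

  void skipSpace() {
    while (pos < buffer.size() &&
           (buffer[pos] == ' ' || buffer[pos] == '\t' ||
            buffer[pos] == '\n')) {
      ++pos;
    }
  }
};
```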
|
|
|
|
|
|
|
|
| |
This PR is part of a series that adds basic support for the typed
continuations/wasmfx proposal.
This particular PR adds cont and nocont as top and bottom types for
continuation types, completely analogous to func and nofunc for function types
(also: exn and noexn).
|
|
|
|
|
|
| |
The stringview types (`stringview_wtf8`, `stringview_wtf16`, and
`stringview_iter`) are not subtypes of `any` even though they are supertypes of
`none`. This breaks the type system invariant that types share a bottom type iff
they share a top type, but we can work around that.
|
|
|
|
|
|
|
|
|
|
|
| |
The lexer was previously an iterator over tokens, but that expressivity is not
actually used in the parser. Instead, we have `input.h` that adapts the token
iterator interface into an interface that is actually useful.
As a first step toward simplifying the lexer implementation to no longer be an
iterator over tokens, update its interface by moving the adaptation from input.h
to the lexer itself. This requires extensive changes to the lexer unit tests,
which will not have to change further when we actually simplify the lexer
implementation.
|
|
|
|
|
|
|
|
|
|
| |
In addition to normal identifiers, support parsing identifiers of the format
`$"..."`. This format is not yet allowed by the standard, but it is a popular
proposed extension (see https://github.com/WebAssembly/spec/issues/617 and
https://github.com/WebAssembly/annotations/issues/21).
Binaryen has historically allowed a similar format and has supported arbitrary
non-standard identifier characters, so it's much easier to support this extended
syntax than to fix everything to use the restricted standard syntax.
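A sketch of lexing the extended syntax, assuming a simplified lexer (real code must also handle escape sequences inside the quotes):

```cpp
#include <cctype>
#include <optional>
#include <string>
#include <string_view>

// After '$', accept either the usual run of identifier characters or
// a quoted string, so $"has spaces" is an identifier.
std::optional<std::string> lexIdent(std::string_view in) {
  if (in.empty() || in[0] != '$') {
    return std::nullopt;
  }
  if (in.size() > 1 && in[1] == '"') {
    auto end = in.find('"', 2);
    if (end == std::string_view::npos) {
      return std::nullopt; // unterminated string
    }
    return std::string(in.substr(2, end - 2));
  }
  size_t end = 1;
  while (end < in.size() && !std::isspace((unsigned char)in[end]) &&
         in[end] != '(' && in[end] != ')') {
    ++end;
  }
  if (end == 1) {
    return std::nullopt; // a bare '$' is not an identifier
  }
  return std::string(in.substr(1, end - 1));
}
```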
|
|
|
|
|
|
| |
Have a single implementation for lexing each of unsigned, signed, and
uninterpreted integers, each generic over the bit width of the integer. This
reduces duplication in the existing code and it will make it much easier to
support lexing more 8- and 16-bit integers.
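A sketch of the idea, decimal-only for brevity (the real lexer also handles signs, hex, and separators):

```cpp
#include <cstdint>
#include <limits>
#include <optional>
#include <string_view>
#include <type_traits>

// One routine generic over the width: accumulate into 64 bits, then
// reject anything that does not fit the requested type, so 8- and
// 16-bit support needs no new lexing code.
template<typename T>
std::optional<T> lexUnsigned(std::string_view str) {
  static_assert(std::is_unsigned_v<T>);
  if (str.empty()) {
    return std::nullopt;
  }
  uint64_t n = 0;
  for (char c : str) {
    if (c < '0' || c > '9') {
      return std::nullopt;
    }
    uint64_t digit = c - '0';
    if (n > (std::numeric_limits<uint64_t>::max() - digit) / 10) {
      return std::nullopt; // overflows even 64 bits
    }
    n = n * 10 + digit;
  }
  if (n > std::numeric_limits<T>::max()) {
    return std::nullopt; // does not fit the requested width
  }
  return static_cast<T>(n);
}

// lexUnsigned<uint8_t>("255") yields 255;
// lexUnsigned<uint8_t>("256") yields nullopt.
```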
|
| |
|
|
|
|
| |
Instead of e.g. `(i32 i32)`, use `(tuple i32 i32)`. Having a keyword to
introduce the s-expression is more consistent with the rest of the language.
|
|
|
|
|
|
|
|
|
|
|
|
| |
We previously supported (and primarily used) a non-standard text format for
conditionals in which the condition, if-true expression, and if-false expression
were all simply s-expression children of the `if` expression. The standard text
format, however, requires the use of `then` and `else` forms to introduce the
if-true and if-false arms of the conditional. Update the legacy text parser to
require the standard format and update all tests to match. Update the printer to
print the standard format as well.
The .wast and .wat test inputs were mechanically updated with this script:
https://gist.github.com/tlively/85ae7f01f92f772241ec994c840ccbb1
|
|
|
|
|
|
| |
Check a couple of testing TODOs off and add more tests of the outlining pass, covering outlining of:
- a sequence at the beginning of an existing function
- a sequence that is outlined into a function that takes no arguments
- multiple sequences from the same source function into different outlined functions
|
|
|
| |
Adds an outlining pass that performs outlining on a module end to end, and two tests.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously CFGWalker designated a particular block as the "exit" block, but it
was just the block that happened to appear at the end of the function that
returned values by implicitly flowing them out. That exit block was not tied in
any way to other blocks that might end in returns, so analyses that needed to
perform some action at the end of the function would have had to perform that
action at the end of the designated exit block but also separately at any return
instruction.
Update CFGWalker to make the exit block a synthetic empty block that is a
successor of all other blocks that implicitly or explicitly return from the
function in case there are multiple such blocks, or to make the exit block the
single returning block if there is only one. This means that analyses will only
perform their end-of-function actions at the end of the exit block rather than
additionally at every return instruction.
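A sketch of the construction with illustrative structures, not CFGWalker's actual types:

```cpp
#include <memory>
#include <vector>

struct BasicBlock {
  std::vector<BasicBlock*> succs;
};

// Collect every block that returns, implicitly or explicitly. With
// more than one, add a synthetic empty exit block succeeding them
// all; with exactly one, that block is the exit block.
BasicBlock* makeExitBlock(std::vector<BasicBlock*>& returningBlocks,
                          std::vector<std::unique_ptr<BasicBlock>>& blocks) {
  if (returningBlocks.empty()) {
    return nullptr; // the function never returns (e.g. infinite loop)
  }
  if (returningBlocks.size() == 1) {
    return returningBlocks[0];
  }
  blocks.push_back(std::make_unique<BasicBlock>());
  BasicBlock* exit = blocks.back().get();
  for (auto* block : returningBlocks) {
    block->succs.push_back(exit);
  }
  return exit;
}
```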
|
|
|
|
|
|
|
|
|
| |
Previously, modifying a single vector element of a `Shared<Vector>` element
required materializing a full vector to do the join. When there is just a single
element to update, materializing all the other elements with the bottom value is
useless work. Add a `Vector<L>::SingletonElement` utility that represents but
does not materialize a vector with a single non-bottom element and allow it to
be passed to `Vector<L>::join`. Also update `Shared` and `Inverted` so that
`SingletonElement` joins still work on vectors wrapped in those other lattices.
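A sketch of the utility's shape, not Binaryen's exact code:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

template<typename L>
struct Vector {
  using Element = std::vector<typename L::Element>;

  // Represents, without materializing, a vector that is bottom
  // everywhere except at one index.
  using SingletonElement = std::pair<size_t, typename L::Element>;

  L lattice;

  bool join(Element& joinee, const SingletonElement& elem) const {
    // Joining bottom into every other index would be a no-op, so
    // only the single non-bottom element needs any work.
    return lattice.join(joinee[elem.first], elem.second);
  }
};
```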
|
|
|
|
|
|
|
| |
Remove the ability to represent the top element of the stack lattice since it
isn't necessary. Also simplify the element type to be a simple vector, update
the lattice method implementations to be more consistent with implementations in
other lattices, and make the tests more consistent with the tests for other
lattices.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The analysis framework stores a separate lattice element for each basic block
being analyzed to represent the program state at the beginning of the block.
However, in many analyses a significant portion of program state is not
flow-sensitive, so does not benefit from having a separate copy per block. For
example, an analysis might track constraints on the types of locals that do not
vary across blocks, so it really only needs a single copy of the constraints for
each local. It would be correct to simply duplicate the state across blocks
anyway, but it would not be efficient.
To make it possible to share a single copy of a lattice element across basic
blocks, introduce a `Shared<L>` lattice. Mathematically, this lattice represents
a single ascending chain in the underlying lattice and its elements are ordered
according to sequence numbers corresponding to positions in that chain.
Concretely, though, the `Shared<L>` lattice only ever materializes a single,
monotonically increasing element of `L` and all of its elements provide access
to that shared underlying element.
`Shared<L>` will let us get the benefits of having mutable shared state in the
concrete implementation of analyses without losing the benefits of keeping those
analyses expressible purely in terms of the monotone framework.
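A sketch of the idea only, eliding the details of the actual implementation: elements are observed sequence numbers along the single shared element's ascending chain:

```cpp
#include <cstdint>

template<typename L>
struct Shared {
  // An element is just an observed sequence number.
  using Element = uint32_t;

  L lattice;
  // The single materialized, monotonically increasing element.
  mutable typename L::Element state{};
  mutable Element sequence = 0;

  Element getBottom() const { return 0; }

  bool join(Element& joinee, const typename L::Element& elem) const {
    // Move the shared state up its ascending chain if needed.
    if (lattice.join(state, elem)) {
      ++sequence;
    }
    // The joinee changes iff it had not yet observed this version.
    if (joinee < sequence) {
      joinee = sequence;
      return true;
    }
    return false;
  }
};
```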
|
|
|
|
|
|
| |
Many of the lattice tests were essentially copy-pasted from one lattice to the
next because they all tested isomorphic subsets of the various lattices,
specifically in the shape of a diamond. Refactor the code so that all lattices
that have tests of this shape use the same utility test functions.
|
|
|
|
|
|
| |
Add a lattice that is a thin wrapper around `wasm::Type` giving it the interface
of a lattice. As usual, `Type::unreachable` is the bottom element, but unlike in
the underlying API, we uniformly treat `Type::none` as the top type so that we
have a proper lattice.
|
|
|
|
|
|
|
| |
This lattice combines any number of other lattices into a single lattice whose
elements are tuples of elements of the other lattices. This will be one of the
most important lattices in the analysis framework because it will be used to
combine information about different parts of the program, e.g. locals and the
value stack, into a single lattice.
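A sketch of such a combined lattice, with join applied component-wise:

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

template<typename... Ls>
struct TupleLattice {
  using Element = std::tuple<typename Ls::Element...>;

  std::tuple<Ls...> lattices;

  bool join(Element& joinee, const Element& elem) const {
    return joinImpl(joinee, elem, std::index_sequence_for<Ls...>{});
  }

private:
  template<std::size_t... Is>
  bool joinImpl(Element& joinee, const Element& elem,
                std::index_sequence<Is...>) const {
    bool changed = false;
    // Fold over the components, joining each in its own lattice.
    ((changed |= std::get<Is>(lattices).join(std::get<Is>(joinee),
                                             std::get<Is>(elem))),
     ...);
    return changed;
  }
};
```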
|
|
|
|
|
|
|
|
|
|
| |
The vector lattice is nearly identical to the array lattice, except that the
size of the elements is specified at runtime when the lattice object is created
rather than at compile time. The code and tests are largely copy-pasted and
fixed up from the array implementation, but there are a couple of differences.
First, initializing vector elements does not need the template magic used to
initialize array elements. Second, the obvious implementations of join and meet
do not work for vectors of bools because they might be specialized to be bit
vectors, so we need workarounds for that particular case.
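A sketch of the join with one plausible workaround for `std::vector<bool>`: its `operator[]` returns a proxy rather than a `bool&`, so the obvious in-place join does not compile for bit vectors:

```cpp
#include <cstddef>
#include <vector>

template<typename L>
struct Vector {
  using Element = std::vector<typename L::Element>;

  L lattice;
  // The size is a runtime property of the lattice object, unlike the
  // compile-time N of the array lattice.
  size_t size;

  bool join(Element& joinee, const Element& elem) const {
    bool changed = false;
    for (size_t i = 0; i < size; ++i) {
      typename L::Element val = joinee[i]; // copy out (proxy-safe)
      if (lattice.join(val, elem[i])) {
        joinee[i] = val; // write back only on change
        changed = true;
      }
    }
    return changed;
  }
};
```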
|
|
|
|
|
| |
The elements of `Array<L, N>` lattice are arrays of length `N` of elements of
`L`, compared pairwise with each other. This lattice is a concrete
implementation of what would be written L^N with pen and paper.
|
|
|
|
| |
This lattice "lifts" another lattice by inserting a new bottom element
underneath it.
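A sketch using `std::optional`, where `nullopt` is the new bottom:

```cpp
#include <optional>

template<typename L>
struct Lift {
  using Element = std::optional<typename L::Element>;

  L lattice;

  Element getBottom() const { return std::nullopt; }

  bool join(Element& joinee, const Element& elem) const {
    if (!elem) {
      return false; // joining the new bottom changes nothing
    }
    if (!joinee) {
      joinee = elem; // rise from the new bottom into L
      return true;
    }
    return lattice.join(*joinee, *elem); // both in L: delegate
  }
};
```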
|
|
|
|
|
| |
Given a type `T`, `Flat<T>` is the lattice where none of the values of `T` are
comparable except with themselves, but they are all greater than a common bottom
element not in `T` and less than a common top element also not in `T`.
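A sketch of the lattice and its join (assumes `T` is equality-comparable):

```cpp
#include <variant>

template<typename T>
struct Flat {
  struct Bot {
    friend bool operator==(Bot, Bot) { return true; }
  };
  struct Top {
    friend bool operator==(Top, Top) { return true; }
  };
  using Element = std::variant<Bot, T, Top>;

  Element getBottom() const { return Bot{}; }
  Element getTop() const { return Top{}; }

  bool join(Element& joinee, const Element& elem) const {
    if (std::holds_alternative<Top>(joinee) ||
        std::holds_alternative<Bot>(elem) || joinee == elem) {
      return false; // nothing new to learn
    }
    if (std::holds_alternative<Bot>(joinee)) {
      joinee = elem; // bottom rises to the new value (or top)
    } else {
      joinee = Top{}; // two distinct values are incomparable: go to top
    }
    return true;
  }
};
```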
|
|
|
|
|
|
| |
The FullLattice concept extends the base Lattice with `getTop` and `meet`
operations. The `Inverted` lattice uses these operations to reverse the order of
an arbitrary full lattice, for example to create a lattice of integers ordered
by `>` rather than by `<`.
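A sketch of the reversal:

```cpp
// Inverted requires a full lattice, since reversing the order swaps
// the roles of top/bottom and join/meet.
template<typename L>
struct Inverted {
  using Element = typename L::Element;

  L lattice;

  Element getBottom() const { return lattice.getTop(); }
  Element getTop() const { return lattice.getBottom(); }

  bool join(Element& joinee, const Element& elem) const {
    return lattice.meet(joinee, elem);
  }
  bool meet(Element& meetee, const Element& elem) const {
    return lattice.join(meetee, elem);
  }
};
```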
|
|
|
| |
Implement a generic lattice template for integral types ordered by `<`.
|
|
|
|
|
| |
This is a lattice with two elements: `false` is bottom and `true` is top.
Add a new gtest file for testing lattices.
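A sketch of the lattice, matching the description rather than the exact code:

```cpp
struct Bool {
  using Element = bool;

  Element getBottom() const { return false; }
  Element getTop() const { return true; }

  // Join is logical or; returns true iff the joinee changed, i.e.
  // rose from false to true.
  bool join(Element& joinee, Element elem) const {
    if (!joinee && elem) {
      joinee = true;
      return true;
    }
    return false;
  }
};
```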
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
lattice (#6035)
In general, the operation of taking the least upper bound of two lattice
elements may depend on some state stored in the lattice object rather than in
the elements themselves. To avoid forcing the elements to be larger and more
complicated in that case (by storing a parent pointer back to the lattice), move
the least upper bound operation to make it a method of the lattice rather than
the lattice element. This is also more consistent with where we put e.g. the
`compare` method.
While we are at it, rename `makeLeastUpperBound` to `join`, which is much
shorter and nicer. Usually we avoid using esoteric mathematical jargon like
this, but "join" as a normal verb actually describes the operation nicely, so I
think it is ok in this case.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Simplify the monotone analyzer by replacing all the state it used to store in
`BlockState` with a simple vector of lattice elements. Use simple indices to
refer to both blocks and their associated states in the vector. Remove the
ability for transfer functions to control the initial enqueued order of basic
blocks since that was a leaky abstraction. Replace the worklist with a
UniqueDeferredQueue since that has generally proven to be more efficient in
similar contexts, and, more importantly, it has a nicer API. Make miscellaneous
simplifications to other code as well.
Delete a few unit tests that exposed the order in which blocks were analyzed
because they printed intermediate results. These tests should be replaced with
tests of analyses' public APIs in the future.
|
|
|
|
|
| |
The problem was that if you constructed a try expression that referenced a
nonexistent tag in one of its catch blocks, the validation code successfully
identified the null pointer but then proceeded to read from it.
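A sketch of the fix's shape with illustrative names:

```cpp
#include <iostream>

// Stand-ins for the real wasm structures and validation helper.
struct Tag {
  int results;
};

bool shouldBeTrue(bool condition, const char* message) {
  if (!condition) {
    std::cerr << "validation error: " << message << '\n';
  }
  return condition;
}

// After reporting that the tag does not exist, validation must stop
// instead of falling through and dereferencing the null pointer.
void validateCatchTag(Tag* tag) {
  if (!shouldBeTrue(tag != nullptr, "catch's tag must exist")) {
    return; // previously, execution continued and read from tag
  }
  std::cout << "tag results: " << tag->results << '\n';
}

int main() {
  validateCatchTag(nullptr); // reports the error, no crash
}
```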
|