| |
This sends --closed-world to wasm-opt from the fuzzer when we use that
flag (previously we only passed it for optimizations, not for fuzz generation).
TranslateToFuzzReader now also stores a boolean indicating whether we are in
closed-world mode.
This has no effect so far; it is a refactoring for a later PR, in which we must
generate code differently depending on whether we are in closed-world mode.
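To illustrate the intent, here is a minimal sketch of how a fuzzer harness can thread the flag through both steps (this is not the actual fuzz_opt.py code; the helper names and probability are made up):
```
import random
import subprocess

# Sketch: when closed-world mode is chosen for an iteration, pass the flag
# both when generating the testcase (-ttf) and when optimizing it, rather
# than only when optimizing as before.
CLOSED_WORLD = random.random() < 0.5

def closed_world_flags():
    return ['--closed-world'] if CLOSED_WORLD else []

def generate_testcase(random_data, wasm_out):
    subprocess.check_call(['bin/wasm-opt', random_data, '-ttf',
                           '-o', wasm_out] + closed_world_flags())

def optimize_testcase(wasm_in, wasm_out, opt_flags):
    subprocess.check_call(['bin/wasm-opt', wasm_in, '-o', wasm_out] +
                          opt_flags + closed_world_flags())
```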
| |
The main addition here is a bundle_clusterfuzz.py script which packages up
the exact files that should be uploaded to ClusterFuzz. It also documents the
process of bundling and testing. You can do
bundle_clusterfuzz.py OUTPUT_FILE.tgz
That bundles wasm-opt from ./bin, which is enough for local testing. For
actually uploading to ClusterFuzz, we need a portable build, and @dschuff
had the idea to reuse the emsdk build, which works nicely. Doing
bundle_clusterfuzz.py OUTPUT_FILE.tgz --build-dir=/path/to/emsdk/upstream/
will bundle wasm-opt (+libs) from the emsdk. I verified that those builds
work on ClusterFuzz.
I added several forms of testing here. First, our main fuzzer fuzz_opt.py now
has a ClusterFuzz testcase handler, which simulates a ClusterFuzz environment.
Second, there are smoke tests that run in the unit test suite, and can also be
run separately:
python -m unittest test/unit/test_cluster_fuzz.py
Those unit tests can also run on a given bundle, e.g. one created from an
emsdk build, for testing right before upload:
BINARYEN_CLUSTER_FUZZ_BUNDLE=/path/to/bundle.tgz python -m unittest test/unit/test_cluster_fuzz.py
A third piece of testing is to add a --fuzz-passes test. That is a mode for
-ttf (translate random data into a valid wasm fuzz testcase) that uses random
data to pick and run a set of passes, to further shape the wasm. (--fuzz-passes
had no previous testing, and this PR fixes it and tidies it up a little, adding some
newer passes too).
Otherwise this PR includes the key run.py script that is bundled and then
executed by ClusterFuzz, basically a python script that runs wasm-opt -ttf [..]
to generate testcases, sets up their JS, and emits them.
fuzz_shell.js, which is the JS to execute testcases, will now check if it is
provided binary data of a wasm file. If so, it does not read a wasm file from
argv[1]. (This is needed because ClusterFuzz expects a single file for the
testcase, so we make a JS file with bundled wasm inside it.)
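As a rough illustration of the flow (a simplified sketch only, not the actual run.py that is bundled; the file names, sizes, and the embedding variable are illustrative):
```
import os
import subprocess

# Sketch of a ClusterFuzz-style generator: feed random bytes to wasm-opt -ttf,
# then wrap the resulting wasm in a single JS file, since ClusterFuzz expects
# one file per testcase.
FUZZER_DIR = os.path.dirname(os.path.abspath(__file__))
WASM_OPT = os.path.join(FUZZER_DIR, 'bin', 'wasm-opt')

def generate_testcase(index, output_dir):
    random_input = os.path.join(output_dir, 'input-%d.dat' % index)
    wasm_file = os.path.join(output_dir, 'module-%d.wasm' % index)
    js_file = os.path.join(output_dir, 'fuzz-testcase-%d.js' % index)

    with open(random_input, 'wb') as f:
        f.write(os.urandom(1024))

    # Translate the random bytes into a valid wasm module; --fuzz-passes also
    # uses the random data to pick and run passes that further shape the wasm.
    subprocess.check_call([WASM_OPT, random_input, '-ttf', '--fuzz-passes',
                           '-all', '-o', wasm_file])

    # Embed the wasm bytes directly in the JS harness so the testcase is a
    # single self-contained JS file (the variable name here is illustrative).
    with open(os.path.join(FUZZER_DIR, 'fuzz_shell.js')) as f:
        shell_js = f.read()
    wasm_bytes = list(open(wasm_file, 'rb').read())
    with open(js_file, 'w') as f:
        f.write('var binary = new Uint8Array(%s);\n' % wasm_bytes)
        f.write(shell_js)
```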
| |
Unlike other module elements, types are not stored on the `Module`.
Instead, they are collected by traversing the IR before printing and
binary writing. The code that collects the types tries to optimize the
order of rec groups based on the number of times each type is used. As a
result, the output order of types generally has no relation to the input
order of types. In addition, most type optimizations rewrite the types
into a single large rec group, and the order of types in that group is
essentially arbitrary. Changes to the code for counting type uses,
sorting types, or sorting rec groups can yield very large changes in the
output order of types, producing test diffs that are hard to review and
potentially harming the readability of tests by moving output types away
from the corresponding input types.
To help make test output more stable and readable, introduce a tool
option that causes the order of output types to match the order of input
types as closely as possible. It is implemented by having the parsers
record the indices of the input types on the `Module` just like they
already record the type names. The `GlobalTypeRewriter` infrastructure
used by type optimizations associates the new types with the old indices
just like it already does for names and also respects the input order
when rewriting types into a large recursion group.
By default, wasm-opt and other tools clear the recorded type indices
after parsing the module, so their default behavior is not modified by
this change.
Follow-on PRs will use the new flag in more tests, which will generate
large diffs but leave the tests in stable, more readable states that
will no longer change due to other changes to the optimizing type
sorting logic.
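The heart of the new ordering can be illustrated with a small sketch (conceptual Python only; the real logic lives in the C++ type-sorting code, and the names here are made up):
```
# Prefer the recorded input order when a type has one, and fall back to the
# existing use-count-based ordering for types that do not (for example, types
# newly created by optimizations).
def order_types(types, recorded_input_index, use_counts):
    def key(t):
        if t in recorded_input_index:
            return (0, recorded_input_index[t])
        return (1, -use_counts.get(t, 0))
    return sorted(types, key=key)
```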
| |
Remove `SExpressionParser`, `SExpressionWasmBuilder`, and `cashew::Parser`.
Simplify gen-s-parser.py. Remove the --new-wat-parser and
--deprecated-wat-parser flags.
| |
We settled on the name `WASM_EXNREF` for the new EH setting in Emscripten.
https://github.com/emscripten-core/emscripten/blob/2bc5e3156f07e603bc4f3580cf84c038ea99b2df/src/settings.js#L782-L786
"New EH" sounds vague, and I'm not sure "experimental" is really necessary
anyway, given that the potential users of this option are aware that this is a
new spec that was adopted recently.
To make the option names consistent, this renames `--translate-to-eh`
(the option that only runs the translator) to `--translate-to-exnref`, and
`--experimental-new-eh` (the option that runs the translator at the end of the
whole pipeline) to `--emit-exnref`, and renames the pass and variable names
in the code accordingly as well.
In case anyone is using the old option names (and also to make the
Chromium CI pass), this does not delete the old options.
| |
Previously we had passes --generate-stack-ir, --optimize-stack-ir, --print-stack-ir
that could be run like any other passes. After generating StackIR it was stashed on
the function and invalidated if we modified BinaryenIR. If it wasn't invalidated then
it was used during binary writing. This PR switches things so that we optionally
generate, optimize, and print StackIR only during binary writing. It also removes
all traces of StackIR from wasm.h - after this, StackIR is a feature of binary writing
(and printing) logic only.
This is almost NFC, but there are some minor noticeable differences:
1. We no longer note in the text format that a function has StackIR when we see
it is there. It will not be there during normal printing anyway, as it is now only
present during binary writing (but --print-stack-ir still works as before; as
mentioned above, it runs during writing).
2. --generate/optimize/print-stack-ir change from being passes to being flags that
control that behavior instead. As passes, their order on the command line mattered;
now it does not, and they only "globally" affect things during writing (see the
example below).
3. The C API changes slightly, as there is no longer a need to pass an "optimize"
option to the StackIR APIs. Whether we optimize is handled by --optimize-stack-ir,
which is set like other optimization flags on the PassOptions object, so we don't
need the old option in those C APIs.
The main benefit here is simplifying the code, so we don't need to think about
StackIR in more places than just binary writing. That may also allow future
improvements to our usage of StackIR.
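For example (a usage sketch), after this change
wasm-opt in.wasm --print-stack-ir -O2 -o out.wasm
and
wasm-opt in.wasm -O2 --print-stack-ir -o out.wasm
behave identically, since --print-stack-ir is now a global flag that takes effect
during binary writing rather than a pass that runs at its position on the command line.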
| |
This flag is intended to help users gracefully migrate to the new wat parser. It
will be removed again not too long after the new wat parser is enabled by
default in wasm-opt.
| |
We had two JS files that could run a wasm file for fuzzing purposes:
* --emit-js-shell, which emitted a custom JS file that runs the wasm.
* scripts/fuzz_shell.js, which was a generic file that did the same.
Both of those load the wasm and then call the exports in order, logging their
return values (if any), exceptions, and so forth as they go. The fuzzer then
compares that output to running the same wasm in another VM, etc. The
difference is that one was custom-generated for the wasm file, and one was
generic. Aside from that they were similar and duplicated a bunch of code.
This PR improves things by removing (1) and using (2) in all places, that is, we
now use the generic file everywhere.
I believe we added (1) because we thought a generic file couldn't do all the
things we need, like know the order of exports and the types of return values,
but in practice there are ways to do those things: the exports are in fact
iterated in the proper order (JS iteration order is deterministic, thankfully), and
as for return types, we don't want to print type internals anyhow since that would
limit fuzzing --closed-world. We do need to be careful with types in JS (see
notes in the PR about the type of null), but it's not too bad. As for the types
of params, it's fine to pass in null for them all anyhow (null converts to a
number or a reference without error).
| |
This adds an `--experimental-new-eh` option to `wasm-opt`. The difference
between this and `--translate-to-new-eh` is that `--translate-to-new-eh`
just runs the `TranslateToNewEH` pass, while `--experimental-new-eh`
attaches the `TranslateToNewEH` pass at the end of the whole optimization
pipeline. So if no other passes or optimization options (`-On`) are
specified, it is equivalent to `--translate-to-new-eh`. If other
optimization passes are specified, it runs them and at the end runs the
translator, to ensure the new EH instructions are emitted. The reason we
are doing it this way is that the optimization pipeline as a whole
does not support the new EH instructions yet, but we would like to
provide an option to emit reasonably good code with the new EH
instructions.
This also means that when the optimization level is > 3, it will also run
the StackIR + local2stack optimization after the translation.
I am not sure how to test the output of this option, given that there is not
much point in testing the default optimization passes, and it is also
not clear how to print the Stack IR if Stack IR generation and
optimization run as part of the pipeline rather than via explicit command
line options.
This is created in favor of #6267, which added the option to
`optimization-options.h`. That had the problem of running the translator
multiple times when `-On` was given multiple times on the command line,
which I learned is a rather common usage. This adds the option directly
to `wasm-opt.cpp`, which avoids the problem. With this, it is still
possible to create and optimize Stack IR unnecessarily, but that feels like
the better alternative.
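As a usage sketch,
wasm-opt in.wasm --experimental-new-eh -o out.wasm
is equivalent to running just `--translate-to-new-eh`, while
wasm-opt in.wasm -O2 --experimental-new-eh -o out.wasm
runs the normal -O2 pipeline and then the translator at the end.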
| |
This PR changes how file paths and the command line are handled. On startup on Windows,
we process the wstring version of the command line (including the file paths) and re-encode
it to UTF8 before handing it off to the rest of the command line handling logic. This means
that all paths are stored in UTF8-encoded std::strings as they go through the program, right
up until they are used to open files. At that time, they are converted to the appropriate
native format with the new to_path function before being passed to the stdlib open functions.
This has the advantage that all of the non-file-opening code can use a single type to hold paths
(which is good since std::filesystem::path has proved problematic in some cases), but it has the
disadvantage that someone could add new code that forgets to call to_path before opening
a file. That is somewhat mitigated by the fact that most of the code uses the ModuleIOBase
classes for opening files.
Fixes #4995
| |
Do not optimize or modify public heap types in any way. Public heap types
include the types of imported or exported functions, tables, globals, etc. This
is important to maintain the public interface of a module and ensure it can
still link and interact as intended with the outside world.
Also add a validation error if we find any nontrivial public types that are not
the types of imported or exported functions. This error is meant to help the
user ensure that type optimizations are not silently inhibited. In the future,
we may want to add options to silence this error or downgrade it to a warning.
This commit only updates the type updating machinery to avoid updating public
types. It does not update any optimization passes accordingly. Since we avoid
modifying public signature types already, this is not expected to break
anything, but in the future once we have function subtyping or if we make the
error optional, we may have to update some of our optimization passes.
| |
If wasm-opt or wasm-dis are given an invalid binary, after the error
message we can also print out the wasm we did manage to read. That
includes global stuff like imports and also all the functions up until
there. This can help debugging in some situations.
Only do this when --debug is passed, as it can be very verbose and
in general users might not want it.
This turns out to be technically easy to do, since we already throw an
exception on a parse error, and we fill up the wasm as we go, so it just
contains what we have read so far, and we can simply print it.
Fixes #5344
Also switch an existing test's comments from # to ;;, which was
noticed here.
| |
Previously -O3 -O1 would run -O1 twice, since the last flag set the global opt level
to 1 and all invocations of the optimizer pipeline then read that. This makes each
pipeline define its own opt level.
This has been a long-standing annoyance that wasn't much noticed, but with
wasm GC there is more need to run the optimization pipeline more than once,
and sometimes it is nice to run different levels.
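For example, with this change
wasm-opt in.wasm -O3 -O1 -o out.wasm
runs a full -O3 pipeline followed by a -O1 pipeline, rather than running the -O1
pipeline twice.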
| |
The "ignore trap" logic there is nowhere near enough for what we'd need to
actually fuzz in a way that ignores traps, so this removes it. At the moment that
logic just allows a trap to happen without causing an error (that is, when comparing
two results, one might trap and the other not, but they'd still be considered
"equal"). But due to how we optimize traps in TrapsNeverHappens mode, the
optimizer is free to assume the trap never occurs, which might remove side
effects that are noticeable later. To actually handle that, we'd need to refactor
the code to retain results per function (including the Loggings) and then to
ignore everything from the very first trapping function. That is somewhat
complicated to do, and a simpler thing is done in #4936, so we won't need
it here.
| |
Implement the basic infrastructure for the full WAT parser with just enough
detail to parse basic modules that contain only imported globals. Parsing
functions correspond to elements of the grammar in the text specification and
are templatized over context types that correspond to each phase of parsing.
Errors are explicitly propagated via `Result<T>` and `MaybeResult<T>` types.
Follow-on PRs will implement additional phases of parsing and parsing for new
elements in the grammar.
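A schematic of that structure (a Python sketch of the error-propagation style only, not the actual `Result<T>` API; the context methods here are made up):
```
# Each grammar element gets a parsing function, generic over a "context"
# object (one context type per parsing phase). Errors are ordinary values
# that callers check and propagate, mirroring Result<T> / MaybeResult<T>.
class Err:
    def __init__(self, msg):
        self.msg = msg

def global_type(ctx, tokens):
    # MaybeResult-style: None means the element is simply not present.
    if not tokens or tokens[0] not in ('i32', 'i64', 'f32', 'f64'):
        return None
    return ctx.make_global_type(tokens.pop(0))

def import_global(ctx, tokens):
    # Result-style: an Err is returned and propagated rather than thrown.
    ty = global_type(ctx, tokens)
    if ty is None:
        return Err('expected global type')
    if isinstance(ty, Err):
        return ty
    return ctx.add_imported_global(ty)
```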
| |
The general shape of the --help output is now:
========================
wasm-foo
Does the foo operation
========================
wasm-foo opts:
--------------
--foo-bar ..
Tool opts:
----------
..
The options are now in categories, with the more specific ones - most likely to be
wanted by the user - first. I think this makes the list a lot less confusing.
In particular, in wasm-opt all the opt passes are now in their own category.
Also add a script to make it easy to update the help tests.
| |
We used to only compare return values, and in #4369 we started comparing
whether an uncaught exception was thrown. This also adds whether a trap
occurred to `ExecutionResults`. So in `--fuzz-exec`, if a program with a
trap loses the trap or vice versa, it will error out saying the result
has changed, unless either of `--ignore-implicit-traps` or
`--traps-never-happen` is set.
| |
As suggested in
https://github.com/WebAssembly/binaryen/pull/3955#issuecomment-871016647
This applies commandline features first. If the features section is present, and
disallows some of them, then we warn. Otherwise, the features can combine
(for example, a wasm may enable feature X because it has to use it, and a user
can simply add the flag for feature Y if they want the optimizer to try to use it;
both flags will then be enabled).
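A schematic of that order (illustrative Python; the parameter and helper names are made up, and this is not the actual C++ logic):
```
def resolve_features(cli_features, section_present, section_used,
                     section_disallowed):
    # Command-line features are applied first.
    enabled = set(cli_features)
    if section_present:
        blocked = enabled & set(section_disallowed)
        if blocked:
            # The features section disallows some requested features: warn.
            print('warning: features section disallows %s' % sorted(blocked))
        else:
            # Otherwise the two sources combine: the wasm may require feature
            # X, and the user may add feature Y for the optimizer to try.
            enabled |= set(section_used)
    return enabled
```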
This is important because in some cases we need to know the features before
parsing the wasm, in the case that the wasm does not use the features section.
In particular, non-nullable GC locals have an effect during parsing. (Typed
function references also do, but we found a way to apply their effect all the time,
that is, always use the refined type, and that happened to not break the case
where the feature is disabled - but such a workaround is not possible with
non-nullable locals.)
To make this less error-prone, add a FeatureSet input as a parameter to
WasmBinaryBuilder. That is, when building a module, we must give it the
features to use while doing so.
This will unblock #3955. That PR will also add a test for the actual usage
of a feature during loading (the test can only be added there, after that PR
unbreaks things).
| |
If we run a pass that removes DWARF followed by one that could destroy it, then
there is no possible problem - there is nothing left to destroy. We can run the later
pass with no issues (and no warnings).
Also add an assertion that a pass runner is only run once. That has always been
the assumption, and now that we track whether the added passes remove debug
info, we need to check it.
Fixes emscripten-core/emscripten#14161
| |
This avoids needing to add an include of wasm-printing if a file doesn't already have it.
To achieve that, add the std::ostream hooks in wasm.h, and also use them
when possible, removing the need for the special WasmPrinter object.
Also stop printing in "full" (print types on each line) in error messages by default. The
user can still get that, as always, using BINARYEN_PRINT_FULL=1 in the env.
| |
Previously the fuzzer constructed a new random valid wasm file from
scratch. The new --initial-fuzz=FILENAME option makes it start from
an existing wasm file, and then add random contents on top of that. It
also randomly modifies the existing contents, for example tweaking
a Const, replacing some nodes with other things of the same type, etc.
It also has a chance to replace a drop with a logging (as some of our
tests just drop a result, and we match the optimized output's wasm
instead of the result; by logging, the fuzzer can check things).
The goal is to find bugs by using existing hand-written testcases as
a basis. This PR uses the test suite's testcases as initial fuzz contents.
This can find issues as they often check for corner cases - they are
designed to be "interesting", which random data may be less likely
to find.
This has found several bugs already, see recent fuzz fixes. I mentioned
the first few on Twitter but past 4 I stopped counting...
https://twitter.com/kripken/status/1314323318036602880
This required various changes to the fuzzer's generation logic to account
for the fact that there can be existing functions and other contents present
before it starts to run, so it needs to avoid name collisions and the like.
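Usage is something like
wasm-opt random.dat -ttf --initial-fuzz=existing_testcase.wasm -o fuzz.wasm
(the file names here are placeholders; in practice the fuzz harness picks testcases
from the test suite as the initial contents).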
| |
This PR contains:
- Changes that enable/disable tests on Windows to allow for better local testing.
- Also changes many abort() into Fatal() when it is really just exiting on error. This is because abort() generates a dialog window on Windows which is not great in automated scripts.
- Improvements to CMake to better work with the project in IDEs (VS).
| |
Adds an IR profile to each function so the validator can determine
which validation rules to apply and adds a flag to have the wast
parser set the profile to Poppy for testing purposes.
| |
This moves the fuzzer de-NaN logic out into a separate pass. This is
cleaner and also better since the old way would de-NaN once, but then
the reducer could generate code with nans. The new way lets us de-NaN
while reducing.
| |
Since the --roundtrip pass is more general than --fuzz-binary anyway. Also reimplements `ModuleUtils::clearModule` to use the module destructor and placement new to ensure that no members are missed.
| |
This adds support for fuzzing with wabt's wasm2c that @binji wrote.
Basically we compile the wasm to C, then compile the C to a native
executable with a custom main() to wrap around it. The executable
should then print exactly the same as that wasm when run in either
the binaryen interpreter or in a JS VM with our wrapper JS for that
wasm. In other words, compiling the wasm to C is another way to
run that wasm.
The main reasons I want this are to fuzz wasm2c itself, and to
have another option for fuzzing emcc. For the latter, we do fuzz
wasm-opt quite a lot, but that doesn't fuzz the non-wasm-opt
parts of emcc. And using wasm2c for that is nice since the
starting point is always a wasm file, which means we
can use tools like wasm-reduce and so forth, which can be
integrated with this fuzzer.
This also:
* Refactors the fuzzer harness a little to make it easier to add more "VMs"
to run wasms in.
* Does not autoreduce when re-running a testcase, an issue I hit while
developing this.
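Conceptually the new path does something like the following (a simplified sketch, not the actual harness code; the wrapper file and compiler invocation are placeholders):
```
import subprocess

# Sketch of the wasm2c-based "VM": compile the wasm to C, link it with a
# custom main() wrapper, run the native binary, and compare its output with
# the other ways of running the same wasm.
def run_via_wasm2c(wasm_file):
    subprocess.check_call(['wasm2c', wasm_file, '-o', 'testcase.c'])
    # main_wrapper.c stands in for the custom main() described above; a real
    # build also needs the wasm2c runtime sources on the command line.
    subprocess.check_call(['cc', 'main_wrapper.c', 'testcase.c',
                           '-o', 'testcase_native'])
    return subprocess.check_output(['./testcase_native']).decode()

def compare_with_interpreter(wasm_file, interpreter_output):
    native_output = run_via_wasm2c(wasm_file)
    if native_output != interpreter_output:
        raise Exception('wasm2c output differs from the interpreter')
```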
| |
Don't print the entire module on an error. Instead, just print
the validation errors.
However, if the user passed --print, then do print it, as otherwise
nothing would get printed - the error would be before the pass
to print happens. And in general a user passing in a request
to print would expect a printed module anyhow.
fixes #2634
| |
Optionally track the binary format code section offsets,
that is, when loading a binary, remember where each IR
node was read from. This is necessary for DWARF
debug info, as these are the offsets DWARF refers to.
(Note that eventually we may want to do something
else, like first read the DWARF and only then add
debug info annotations into the IR in a more LLVM-like
manner, but this is more straightforward and should be
enough to update debug lines and ranges).
This tracking adds noticeable overhead - every single
IR node adds an entry in a map - so avoid it unless
actually necessary. Specifically, if the user passes in
-g and there are actually DWARF sections in the
binary, and we are not about to remove those sections,
then we need it.
Print binary format code section offsets in text, when
printing with -g. This will help debug and test DWARF
support. It looks like
;; code offset: 0x7
as an annotation right before each node.
Also add support for -g in wasm-opt tests (unlike
a pass flag, it has just a single dash as a prefix).
Helps #2400
| |
This means that debugging/tracing can now be enabled and controlled
centrally without managing and passing state around the codebase.
| |
This allows debug trace messages to be split by channel. So you
can pass `--debug` to simply debug everything, or `--debug=opt`
to only debug wasm-opt.
This change is the initial introduction but as a followup I hope to
convert all tracing over to this new system so we can more easily
control the debug output.
| |
If wasm-opt is run with no passes, warn, as we've gotten reports that people
assume a tool called "wasm-opt" should optimize automatically (but we follow
llvm's opt convention of not doing so).
Add a --quiet (-q) flag that suppresses this minor warning, and the other minor
warning where there is no output file.
| |
Use one shared location in optimization-options as much as possible.
This also allows tools like wasm2js to receive that flag.
| |
This is useful for front-ends which wish to selectively enable or
disable coloring.
Also expose these APIs from the C API.
| |
This is useful for wasm2js, as we don't emit traps for OOB loads and so forth the way wasm does (just as we don't trap on bad float-to-int, since it's too hard in JS, and it's undefined behavior in C anyhow). It may also help general fuzzing, as those traps may make other interesting patterns less likely.
Also add more wasm2js support in the fuzzer, which includes using this no-OOB option.
| |
Applies the changes in #2065, and temporarily disables the hook since it's too slow to run on a change this large. We should re-enable it in a later commit.
| |
Mass change to apply clang-format to everything. We are applying this in a PR by me so the (git) blame is all mine ;) but @aheejin did all the work to get clang-format set up and all the manual work to tidy up some things to make the output nicer in #2048
| |
Implement interpretation of remaining bulk memory ops, add bulk memory
spec tests with light modifications, fix bugs preventing the fuzzer
from running correctly with bulk memory, and fix bugs found by the
fuzzer.
| |
This allows us to emit a (potentially modified) target features
section and conditionally emit other sections such as the DataCount
section based on the presence of features.
| |
If the user does not supply features explicitly on the command line,
read and use the features in the target features section for
validation and passes. If the user does supply features explicitly,
error if they are not a superset of the features marked as used in the
target features section and the user does not explicitly handle this.
| |
* make DE_NAN avoid creating nan literals in the first place
* add a reducer option `--denan` to not introduce nans in destructive reduction
* add a `Literal::isNaN()` method
* also remove the default exception logging from the fuzzer js glue, which is a source of non-useful VM differences (like nan nondeterminism)
* added an option `--no-fuzz-nans` to make it easy to avoid nans when fuzzing (without hacking the source and recompiling).
Background: trying to get fuzzing on jsc working despite this open issue: https://bugs.webkit.org/show_bug.cgi?id=175691
| |
A user that just does
```
wasm-opt input.wasm -O
```
may assume that the input file should have been optimized. But without `-o` we don't emit any output.
Often you may not want any output, like if you just want to run a pass like `--metrics`. But for most users wasm-opt is probably going to be used as an optimizer of files. So this PR suggests we emit a warning in that case.
For comparison, LLVM's `opt` prints to the console by default, but it avoids printing a binary there and issues a warning instead. Perhaps we should do the same, instead of this warning? That would also not be confusing.
Closes #1907
| |
The main fuzz_opt.py script compares JS VMs, and separately runs binaryen's fuzz-exec that compares the binaryen interpreter to itself (before and after opts). This PR lets us directly compare binaryen's interpreter output to JS VMs. This found a bunch of minor things we can do better on both sides, giving more fuzz coverage.
To enable this, a bunch of tiny fixes were needed:
* Add --fuzz-exec-before, which is like --fuzz-exec but only runs the code before opts are run, instead of before and after.
* Normalize double printing (so JS and C++ print comparable things). This includes negative zero in JS, which we never printed properly until now (see the sketch below).
* Various improvements to how we print fuzz-exec logging - remove unhelpful things, and normalize the rest across JS and C++.
* Properly legalize the wasm when --emit-js-wrapper is used (i.e., when we will run the code from JS), and use that in the JS wrapper code.
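For instance, the kind of normalization needed before diffing the two logs can be sketched as follows (illustrative only; the real logic is split between the harness and the printers):
```
import math

# Make numeric output comparable between the JS VM and the binaryen
# interpreter before comparing the two logs line by line.
def normalize_line(line):
    try:
        value = float(line)
    except ValueError:
        return line
    if value == 0:
        # Keep negative zero distinct and consistent on both sides (JS's
        # default printing turns -0 into "0").
        return '-0' if math.copysign(1.0, value) < 0 else '0'
    return repr(value)

def normalize(output):
    return '\n'.join(normalize_line(l) for l in output.splitlines())
```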
| |
Add feature flags and a struct interface. The default feature set has all features enabled.
| |
This picks up from #1644 and indeed borrows the test case from there.
| |
(#1751)
partially solves #1676.
| |
This PR includes non-mutable globals in precompute, which will allow me to continue removing manual inlining of constants in AssemblyScript without breaking something. Related: #1621, i.e.
```
enum Animal {
  CAT = 0,
  DOG = CAT + 1 // requires that `Animal.CAT` is evaluated to
                // precompute the constant value for `Animal.DOG`
}
```
| |
* support source map input in wasm-opt, refactoring the loading code into wasm-io
* use wasm-io in wasm-as
* support output source maps in wasm-opt
* add a test for wasm-opt and source maps
|