path: root/src/wasm/CMakeLists.txt
* [NFC] Encapsulate source map reader state (#7132) (Thomas Lively, 2024-12-03; 1 file, +1/-0)
Move all state relevant to reading source maps out of WasmBinaryReader and into a new utility, SourceMapReader. This is a prerequisite for parallelizing the parsing of function bodies, since the source map reader state is different at the beginning of each function. Also take the opportunity to simplify the way we read source maps, for example by deferring the reading of anything but the position of a debug location until it will be used, and by using `std::optional` instead of singleton `std::set`s to store function prologue and epilogue debug locations.
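For readers unfamiliar with the codebase, a minimal C++ sketch of this kind of encapsulation follows. Apart from the SourceMapReader name itself, every field and method here is a hypothetical illustration rather than Binaryen's actual interface; the point is only that the per-function source-map cursor and the optional prologue/epilogue locations live in a single object.

```cpp
// Hypothetical sketch only, not Binaryen's real SourceMapReader: all
// source-map reading state is owned by one object, so separate reader state
// can be used per function body.
#include <cstddef>
#include <optional>
#include <string_view>

struct DebugLocation {
  std::size_t fileIndex = 0;
  std::size_t line = 0;
  std::size_t column = 0;
};

class SourceMapReader {
public:
  explicit SourceMapReader(std::string_view mappings) : mappings(mappings) {}

  // Only the binary position of the next mapping is decoded eagerly; the
  // remaining fields of a segment can be decoded lazily, when actually used.
  std::optional<DebugLocation> locationFor(std::size_t binaryOffset) {
    if (binaryOffset != nextMappedOffset) {
      return std::nullopt;
    }
    return DebugLocation{}; // full segment decoding elided in this sketch
  }

private:
  std::string_view mappings;
  std::size_t pos = 0;               // cursor into the mappings string
  std::size_t nextMappedOffset = 0;  // binary offset of the next mapping
  // std::optional instead of singleton sets for prologue/epilogue locations.
  std::optional<DebugLocation> prologue;
  std::optional<DebugLocation> epilogue;
};
```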
* Add a utility for comparing and hashing rec group shapes (#6808) (Thomas Lively, 2024-08-07; 1 file, +1/-0)
This is very similar to the internal utilities for canonicalizing rec groups in the type system implementation, except that the new utility also supports ordered comparison of rec groups, and of course the new utility only uses the public type API. A follow-up PR will replace the internal implementation of rec group comparison and hashing in the type system with this one. Another follow-up PR will use this new utility in a type optimization.
* Remove obsolete parser code (#6607) (Thomas Lively, 2024-05-29; 1 file, +0/-1)
Remove `SExpressionParser`, `SExpressionWasmBuilder`, and `cashew::Parser`. Simplify gen-s-parser.py. Remove the --new-wat-parser and --deprecated-wat-parser flags.
* [StackIR] Run StackIR during binary writing and not as a pass (#6568) (Alon Zakai, 2024-05-09; 1 file, +1/-0)
Previously we had passes --generate-stack-ir, --optimize-stack-ir, and --print-stack-ir that could be run like any other passes. After generating StackIR it was stashed on the function and invalidated if we modified BinaryenIR. If it wasn't invalidated then it was used during binary writing. This PR switches things so that we optionally generate, optimize, and print StackIR only during binary writing. It also removes all traces of StackIR from wasm.h - after this, StackIR is a feature of the binary writing (and printing) logic only. This is almost NFC, but there are some minor noticeable differences:
1. We no longer print "has StackIR" in the text format when we see it is there. It will not be there during normal printing, as it is only present during binary writing (but --print-stack-ir still works as before; as mentioned above, it runs during writing).
2. --generate/optimize/print-stack-ir change from being passes to being flags that control that behavior instead. As passes, their order on the command line mattered, while now it does not, and they only "globally" affect things during writing.
3. The C API changes slightly, as there is no need to pass an "optimize" option to the StackIR APIs. Whether we optimize is handled by --optimize-stack-ir, which is set like other optimization flags on the PassOptions object, so we don't need the old option to those C APIs.
The main benefit here is simplifying the code, so we don't need to think about StackIR in more places than just binary writing. That may also allow future improvements to our usage of StackIR.
* [NFC] Split the new wat parser into multiple files (#5960) (Thomas Lively, 2023-09-19; 1 file, +0/-2)
And put the new files in a new source directory, "parser". This is a rough split and is not yet expected to dramatically improve compile times. The exact organization of the new files is subject to change, but this splitting should be enough to make further parser development more pleasant.
* Factor IRBuilder utility out of the new wat parser (#5880) (Thomas Lively, 2023-08-22; 1 file, +1/-0)
Add an IRBuilder utility in a new wasm-ir-builder.h header. IRBuilder is extremely similar to Builder, except that it manages building full trees of Binaryen IR from a linear sequence of instructions, whereas Builder only builds a single IR node at a time. To build full IR trees, IRBuilder maintains an internal stack of expressions, popping children off the stack and pushing the new node onto the stack whenever it builds a new node. In addition to providing makeXYZ functions to allocate, initialize, and finalize new IR nodes, IRBuilder also provides a visit() method that can be used when the user has already allocated the IR nodes and only needs to reconstruct the connections between them. This will be useful in outlining, both for constructing outlined functions and for reconstructing functions around arbitrary outlined holes. Besides the new wat parser and outlining, this new utility can also eventually be used in the binary parser and to convert from Poppy IR back to Binaryen IR if that ever becomes necessary.
To simplify this initial change, IRBuilder exposes the same interface as the code it replaces in the wat parser. A future change requiring more extensive changes to the wat parser will simplify this interface. Also, since the new code is tested only via the new wat parser, it only supports building instructions that were already supported by the new wat parser, to avoid trying to support any instructions without corresponding testing. Implementing support for the remaining instructions is left as future work.
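As an illustration of the stack discipline described above, here is a small, self-contained C++ sketch. The node type and the makeConst/makeAdd names are invented for the example and are not IRBuilder's actual interface; they only show how a linear instruction sequence turns into a tree.

```cpp
// Illustrative sketch of building a tree from a linear instruction sequence:
// each new node pops its children off an internal stack, then is pushed back.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Node {
  std::string op;
  std::vector<std::unique_ptr<Node>> children;
};

class TreeBuilder {
public:
  // A leaf instruction (e.g. a constant) pops nothing.
  void makeConst(int value) {
    auto node = std::make_unique<Node>();
    node->op = "const " + std::to_string(value);
    push(std::move(node));
  }
  // A binary instruction pops its two operands and pushes the combined node.
  void makeAdd() {
    auto right = pop();
    auto left = pop();
    auto node = std::make_unique<Node>();
    node->op = "add";
    node->children.push_back(std::move(left));
    node->children.push_back(std::move(right));
    push(std::move(node));
  }
  std::unique_ptr<Node> finish() { return pop(); }

private:
  void push(std::unique_ptr<Node> node) { stack.push_back(std::move(node)); }
  std::unique_ptr<Node> pop() {
    auto node = std::move(stack.back());
    stack.pop_back();
    return node;
  }
  std::vector<std::unique_ptr<Node>> stack;
};

int main() {
  // The linear sequence "const 1; const 2; add" becomes the tree add(1, 2).
  TreeBuilder builder;
  builder.makeConst(1);
  builder.makeConst(2);
  builder.makeAdd();
  auto root = builder.finish();
  std::cout << root->op << " has " << root->children.size() << " children\n";
}
```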
* [Parser] Begin parsing modules (#4716) (Thomas Lively, 2022-06-10; 1 file, +1/-0)
Implement the basic infrastructure for the full WAT parser with just enough detail to parse basic modules that contain only imported globals. Parsing functions correspond to elements of the grammar in the text specification and are templatized over context types that correspond to each phase of parsing. Errors are explicitly propagated via `Result<T>` and `MaybeResult<T>` types. Follow-on PRs will implement additional phases of parsing and parsing for new elements in the grammar.
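The Result/MaybeResult style of explicit error propagation can be sketched roughly as follows. This is a generic illustration built on std::variant, not the parser's actual type definitions; the Err struct and parseIndex function are made up for the example.

```cpp
// Rough illustration of explicit error propagation, not the real types:
// every parsing function returns its value or an error, and callers check.
#include <cctype>
#include <iostream>
#include <string>
#include <variant>

struct Err {
  std::string msg;
};

// Result<T>: either a successfully parsed T or an error.
template <typename T> using Result = std::variant<T, Err>;

// MaybeResult<T>: the grammar element may also simply be absent.
template <typename T> using MaybeResult = std::variant<std::monostate, T, Err>;

Result<int> parseIndex(const std::string& token) {
  if (token.empty() || !std::isdigit(static_cast<unsigned char>(token[0]))) {
    return Err{"expected an index"};
  }
  return std::stoi(token);
}

int main() {
  auto result = parseIndex("42");
  if (auto* err = std::get_if<Err>(&result)) {
    std::cerr << "parse error: " << err->msg << "\n";
    return 1;
  }
  std::cout << "parsed index " << std::get<int>(result) << "\n";
}
```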
* [Parser][NFC] Create a public wat-lexer.h header (#4695) (Thomas Lively, 2022-05-27; 1 file, +1/-0)
wat-parser-internal.h was already quite large after implementing just the lexer, so it made sense to rename it to be lexer-specific and start a new file for the higher-level parser. Also make it a proper .cpp file and split the testable interface out into wat-lexer.h.
* Remove use of std::iterator (#4199) (Derek Schuff, 2021-10-01; 1 file, +4/-1)
It's deprecated in C++17.
* Refactor code out of parsing.h NFC. (#3635) (Alon Zakai, 2021-03-01; 1 file, +1/-0)
Most of it goes in a new parsing.cpp. One method was only used in the s-expression parser, and has been moved there.
* Improve testing on Windows (#3142) (Wouter van Oortmerssen, 2020-09-17; 1 file, +1/-1)
This PR contains:
- Changes that enable/disable tests on Windows to allow for better local testing.
- Changes of many abort() calls into Fatal() when it is really just exiting on error. This is because abort() generates a dialog window on Windows, which is not great in automated scripts.
- Improvements to CMake to better work with the project in IDEs (VS).
* Added headers to CMake files (#3037) (Wouter van Oortmerssen, 2020-08-10; 1 file, +2/-0)
This is needed for headers to show up in IDE projects, and has no other effect on the build.
* Binary format code section offset tracking (#2515) (Alon Zakai, 2019-12-19; 1 file, +1/-0)
Optionally track the binary format code section offsets, that is, when loading a binary, remember where each IR node was read from. This is necessary for DWARF debug info, as these are the offsets DWARF refers to. (Note that eventually we may want to do something else, like first read the DWARF and only then add debug info annotations into the IR in a more LLVM-like manner, but this is more straightforward and should be enough to update debug lines and ranges.)
This tracking adds noticeable overhead - every single IR node adds an entry in a map - so avoid it unless actually necessary. Specifically, we need it only if the user passes in -g, there are actually DWARF sections in the binary, and we are not about to remove those sections.
Print binary format code section offsets in text when printing with -g. This will help debug and test DWARF support. It looks like `;; code offset: 0x7` as an annotation right before each node.
Also add support for -g in wasm-opt tests (unlike a pass, it has just a single `-` as a prefix).
Helps #2400.
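Conceptually, the tracking amounts to an optional side table from IR nodes to the binary offsets they were read from, roughly as in the following sketch. The class and method names here are illustrative, not Binaryen's actual data structures.

```cpp
// Illustrative only: an optional side table recording, for each IR node,
// the code-section offset it was read from. Enabled only when needed
// (e.g. -g plus DWARF sections that will be kept), since every node adds
// one map entry.
#include <cstddef>
#include <unordered_map>

struct Expression; // opaque IR node type for the sketch

class CodeOffsetTracker {
public:
  explicit CodeOffsetTracker(bool enabled) : enabled(enabled) {}

  void noteNodeRead(Expression* expr, std::size_t binaryOffset) {
    if (enabled) {
      offsets[expr] = binaryOffset;
    }
  }

  // Used later, e.g. to print ";; code offset: 0x.." annotations or to
  // relate DWARF line information back to IR nodes.
  const std::unordered_map<Expression*, std::size_t>& all() const {
    return offsets;
  }

private:
  bool enabled;
  std::unordered_map<Expression*, std::size_t> offsets;
};
```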
* DWARF parsing and writing support using LLVM (#2520) (Alon Zakai, 2019-12-19; 1 file, +1/-0)
This imports LLVM code for DWARF handling. That code has the Apache 2 license, like us. It's also the same code used to emit DWARF in the common toolchain, so it seems like a safe choice.
This adds two passes: --dwarfdump, which runs the same code LLVM runs for llvm-dwarfdump (this shows we can parse it ok, and will be useful for debugging), and --dwarfupdate, which writes out the DWARF sections (unchanged from what we read, so it just roundtrips - for updating we need #2515).
This puts LLVM in thirdparty, which is added here. All the LLVM code is behind USE_LLVM_DWARF, which is on by default, but off in JS for now, as it increases code size by 20%.
This current approach imports the LLVM files directly. This is not how they are intended to be used, so it required a bunch of local changes - more than I expected, actually, for the platform-specific stuff. For now this seems to work, so it may be good enough, but in the long term we may want to switch to linking against libllvm. A downside to doing that is that binaryen users would need to have an LLVM build, and even in the waterfall builds we'd have a problem - while we ship LLVM there anyhow, we constantly update it, which means that binaryen would need to be on latest LLVM all the time too (which otherwise, given DWARF is quite stable, we might not need to constantly update).
An even larger issue is that as I did this work I learned about how DWARF works in LLVM, and while the reading code is easy to reuse, the writing code is trickier. The main code path is heavily integrated with the MC layer, which we don't have - we might want to create a "fake MC layer" for that, but it sounds hard. Instead, there is the YAML path, which is used mostly for testing, and which can convert DWARF to and from YAML and from binary. Using the non-YAML parts there, we can convert binary DWARF to the YAML layer's nice Info data, then convert that to binary. This works; however, this is not the path LLVM uses normally, and it supports only some basic DWARF sections - I had to add ranges support, in fact. So if we need more complex things, we may end up needing to use the MC layer approach, or consider some other DWARF library. However, hopefully that should not affect the core binaryen code, which just calls a library for DWARF stuff.
Helps #2400.
* cmake: Convert to using lowercase for functions/macros (#2495) (Sam Clegg, 2019-12-04; 1 file, +2/-2)
This is in line with modern cmake conventions and is much less SHOUTY!
* Collect all object files from the object libraries in a CMake variable (#2477) (Immanuel Haffner, 2019-11-26; 1 file, +1/-1)
Collect them using the `$<TARGET_OBJECTS:objlib>` syntax. Use this variable when adding `libbinaryen` as a static or shared library. Additionally, use the variable with the object files to simplify the `TARGET_LINK_LIBRARIES` commands: add the object libraries to the sources of executables and drop the use of our libraries in `TARGET_LINK_LIBRARIES`. (Object libraries cannot be linked but must be used as sources. See https://cmake.org/pipermail/cmake/2018-June/067721.html)
* Revert "Build libbinaryen as a monolithic statically/shared library (#2463)" ↵Alon Zakai2019-11-251-1/+1
| | | | | (#2474) This reverts commit bf8f36c31c0b8e6213bce840be66937dd6d0f6af.
* Build libbinaryen as a monolithic statically/shared library (#2463) (Immanuel Haffner, 2019-11-22; 1 file, +1/-1)
- Transform libraries created in subdirectories from statically linked libraries to CMake object libraries.
- Link object libraries as `PRIVATE` to `libbinaryen`. According to CMake documentation: "Libraries and targets following PRIVATE are linked to, but are not made part of the link interface." This is exactly what we want, as we only want the C API to be part of the interface.
* Refactor stack IR / binary writer (NFC) (#2250) (Heejin Ahn, 2019-07-23; 1 file, +1/-0)
Previously `StackWriter` and its subclasses had routines for all three modes (`Binaryen2Binary`, `Binaryen2Stack`, and `Stack2Binary`) within a single class. This splits the routines for each mode into a separate class and also factors out binary writing into a separate class (`BinaryInstWriter`) so other classes can make use of it. The new classes are:
- `BinaryInstWriter`: Binary instruction writer. Only responsible for emitting binary contents and no other logic.
- `BinaryenIRWriter`: Converts binaryen IR into something else.
- `BinaryenIRToBinaryWriter`: Writes binaryen IR to binary.
- `StackIRGenerator`: Converts binaryen IR to stack IR.
- `StackIRToBinaryWriter`: Writes stack IR to binary.
* Function pointer cast emulation (#1468) (Alon Zakai, 2018-03-13; 1 file, +2/-0)
This adds a pass that implements "function pointer cast emulation" - it allows indirect calls to go through even if the number of arguments or their types is incorrect. That is undefined behavior in C/C++ but in practice somehow works on native archs. It is even relied upon in e.g. Python. Emscripten already has such emulation for asm.js, which also worked for asm2wasm. This implements something like it in binaryen, which also allows the wasm backend to use it. As a result, Python should now be portable using the wasm backend.
The mechanism used for the emulation is to make all indirect calls use a fixed number of arguments, all of type i64, and a return type of also i64. Thunks are then placed in the table which translate the arguments properly for the target, basically by reinterpreting to i64 and back. As a result, receiving an i64 when an i32 is sent will have the upper bits all zero, and the reverse would truncate the upper bits, etc. (Note that this is different than emscripten's existing emulation, which converts (as signed) to a double. That makes sense for JS, where doubles can contain all numeric values, but in wasm we have i64s. Also, bitwise conversion may be more like what native archs do anyhow. It is enough for Python.)
Also adds validation for a function's type matching the function's actual params and result (surprised we didn't have that before, but we didn't, and there was even a place in the test suite where that was wrong).
Also simplifies the build script by moving two cpp files into the wasm/ subdir, so they can be built once and shared between the various tools.
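The value adaptation that the thunks perform can be illustrated in plain C++; the real pass emits wasm thunks in the indirect-call table, so this only shows the bit-level effect described above: an i32 widened to the fixed i64 ABI has zero upper bits, and narrowing drops the upper bits again.

```cpp
// Plain C++ illustration of the argument adaptation described above; the
// actual pass generates wasm thunks, this just shows the bit-level effect.
#include <cstdint>
#include <iostream>

// Widen an i32 argument to the fixed all-i64 signature: upper bits are zero.
uint64_t widenToABI(uint32_t value) { return static_cast<uint64_t>(value); }

// Narrow an i64 ABI value back to i32 at the callee: upper bits are dropped.
uint32_t narrowFromABI(uint64_t value) { return static_cast<uint32_t>(value); }

int main() {
  uint32_t arg = 0xDEADBEEFu;
  uint64_t onTheWire = widenToABI(arg);         // 0x00000000deadbeef
  uint32_t received = narrowFromABI(onTheWire); // 0xdeadbeef again
  std::cout << std::hex << onTheWire << " -> " << received << "\n";
}
```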
* Factor wasm validator into a cpp file (#1086) (Derek Schuff, 2017-07-10; 1 file, +1/-0)
Also small cleanup to CMake libraries.
* Wasm h to cpp (#926) (jgravelle-google, 2017-03-10; 1 file, +2/-0)
- Move WasmType function implementations to wasm.cpp
- Move Literal methods to wasm.cpp
- Reorder wasm.cpp shared constants back to top
- Move expression functions to wasm.cpp
- Finish moving things to wasm.cpp
- Split out Literal into its own .h/.cpp. Also factor out common wasm-type module
- Remove unneeded/transitive includes from wasm.h
- Add comment to try/check methods
- Rename tryX/checkX methods to getXOrNull
- Add missing include that should fix appveyor build breakage
- More appveyor
* Read/Write Abstraction (#889) (Alon Zakai, 2017-01-26; 1 file, +1/-0)
- Added ModuleReader/Writer classes that support text and binary I/O
- Use them in wasm-opt and asm2wasm
* Move wasm binary reader and writer from the header file into libwasm (#797) (Derek Schuff, 2016-10-20; 1 file, +1/-0)
* Move wasm.cpp and wasm-s-parser into a library (#796) (Derek Schuff, 2016-10-20; 1 file, +5/-0)
Also moves the bulk of the code in wasm-s-parser into a cpp file. Allows namespace and #include cleanups, and improves -j4 compile time by 20%. Should also make any future parser changes easier and more localized.