author    Alon Zakai <alonzakai@gmail.com>  2018-07-30 17:36:12 -0700
committer GitHub <noreply@github.com>  2018-07-30 17:36:12 -0700
commit    353e2c4fcb7662b8c73f5673fd0fc21bcb7287c6 (patch)
tree      eda3800dcd9751f29492fc5732f4af41c0e473c8 /src/wasm/wasm-binary.cpp
parent    0483d81ad9a665d404f2e2182e718ccbf0592be7 (diff)
Stack IR (#1623)
This adds a new IR, "Stack IR". It represents wasm at a very low level, as a simple stream of instructions - basically the same as wasm's binary format. This is unlike Binaryen IR, which is structured and in tree form.

This gives some small wins on binary sizes: less than 1% in most cases, usually 0.25-0.50% or so. That's not much by itself, but looking forward, this prepares us for multi-value, which we really need an IR like this to optimize well. Also, it's possible there is more we can do already - currently just a few Stack IR optimizations are implemented:

- DCE
- Local2Stack - check if a set_local/get_local pair can be removed, which keeps the set's value on the stack, where, if the stars align, it can be popped instead of doing the get.
- Block removal - remove any blocks with no branches, as that is valid in the wasm binary format.

Implementation-wise, the IR is defined in wasm-stack.h. A new StackInst is defined, representing a single instruction. Most are simple reflections of Binaryen IR (an add, a load, etc.) and just point to it. Control flow constructs are expanded into multiple instructions - a block, for example, turns into a block begin and a block end - and we may also emit extra unreachables to handle the fact that Binaryen IR has unreachable blocks/ifs/loops but wasm does not. Overall, all the differences between Binaryen IR and wasm vanish on the way to Stack IR.

Where this IR lives: each Function now has a unique_ptr to Stack IR, that is, a function may have Stack IR alongside the main IR. If the Stack IR is present, we write it out during binary writing; if not, we do the same Binaryen IR => wasm binary process as before (this PR should not affect speed there). This design lets us use normal Passes on Stack IR; in particular, this PR defines three passes:

- Generate Stack IR
- Optimize Stack IR (might be worth splitting into separate passes eventually)
- Print Stack IR, for debugging purposes

Having these as normal passes is convenient, as they can then run in parallel across functions and benefit from all the other conveniences of our current Pass system. However, a downside of keeping the second IR as an option on Functions, and using normal Passes to operate on it, is that we may get out of sync: if you generate Stack IR and then modify Binaryen IR, the Stack IR may no longer be valid (for example, locals may have been removed or instructions modified in place). To avoid that, Passes now declare whether they modify Binaryen IR; if they do, we throw away the Stack IR.

Miscellaneous notes:

- Just writing Stack IR and then writing that to binary - with no optimizations - is 20% slower than going directly to binary, which is one reason why we still support direct writing. This does lead to some "fun" C++ template code to make that convenient: there is a single StackWriter class, templated over the "mode", which is either Binaryen2Binary (direct writing), Binaryen2Stack, or Stack2Binary. This avoids a lot of boilerplate, as the three modes share a lot of code in overlapping ways.
- Stack IR does not support source maps / debug info. We just don't use that IR if debug info is present.
- A tiny text-format comment (if emitting non-minified text) indicates Stack IR is present, if it is: ((; has Stack IR ;)). This may help with debugging, just in case people forget. There is also a pass to print out the Stack IR for debug purposes, as mentioned above.
- The sieve binaryen.js test was actually not validating all along - these new opts broke it in a more noticeable manner. Fixed.
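As a rough illustration of the flattening described above - a hypothetical sketch only, not the actual definitions in src/wasm-stack.h, which differ in their details - a structured Binaryen IR tree becomes a flat stream of instructions, with control flow split into begin/end markers:

// Hypothetical sketch; names and fields are illustrative.
#include <vector>

struct Expression; // stands in for wasm::Expression

struct StackInst {
  enum Op {
    Basic,      // a plain instruction (add, load, call, ...), just a pointer to the IR node
    BlockBegin, // a Binaryen IR Block expands into a begin and an end
    BlockEnd,
    IfBegin,
    IfElse,
    IfEnd,
    LoopBegin,
    LoopEnd
  } op;
  Expression* origin; // the Binaryen IR expression this instruction reflects
};

// A function's Stack IR is then just a flat stream of such instructions,
// basically mirroring the wasm binary format.
using StackIR = std::vector<StackInst*>;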
Added extra checks in pass-debug mode to verify that if Stack IR should have been thrown out, it was. This should help avoid any confusion from the IR becoming invalid. Also added a comment about the possible future of Stack IR as the main IR, depending on optimization results, following some discussion earlier today.
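A minimal sketch of the invalidation rule mentioned above (the names here are illustrative, not the exact Binaryen API):

// Hypothetical sketch of "throw away Stack IR when a pass modifies Binaryen IR".
#include <memory>

struct StackIR {};                   // placeholder for the Stack IR stream
struct Function {
  std::unique_ptr<StackIR> stackIR;  // optional: null means no Stack IR
};

struct Pass {
  virtual ~Pass() = default;
  // Passes that rewrite Binaryen IR must say so, since that can leave any
  // previously generated Stack IR stale (e.g. locals removed or renumbered).
  virtual bool modifiesBinaryenIR() const { return true; }
  virtual void run(Function* func) = 0;
};

inline void runPassOn(Pass& pass, Function& func) {
  pass.run(&func);
  if (pass.modifiesBinaryenIR()) {
    func.stackIR.reset(); // discard Stack IR rather than risk it being out of sync
  }
  // In pass-debug mode, one could additionally assert here that a stale
  // Stack IR was indeed thrown out.
}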
Diffstat (limited to 'src/wasm/wasm-binary.cpp')
-rw-r--r--  src/wasm/wasm-binary.cpp  797
1 file changed, 26 insertions, 771 deletions
diff --git a/src/wasm/wasm-binary.cpp b/src/wasm/wasm-binary.cpp
index 6a5775151..935660d95 100644
--- a/src/wasm/wasm-binary.cpp
+++ b/src/wasm/wasm-binary.cpp
@@ -19,7 +19,7 @@
#include "support/bits.h"
#include "wasm-binary.h"
-#include "ir/branch-utils.h"
+#include "wasm-stack.h"
#include "ir/module-utils.h"
namespace wasm {
@@ -217,7 +217,7 @@ void WasmBinaryWriter::writeFunctionSignatures() {
}
void WasmBinaryWriter::writeExpression(Expression* curr) {
- StackWriter(*this, o, debug).visit(curr);
+ ExpressionStackWriter<WasmBinaryWriter>(curr, *this, o, debug);
}
void WasmBinaryWriter::writeFunctions() {
@@ -231,22 +231,16 @@ void WasmBinaryWriter::writeFunctions() {
if (debug) std::cerr << "write one at" << o.size() << std::endl;
size_t sizePos = writeU32LEBPlaceholder();
size_t start = o.size();
- Function* function = wasm->functions[i].get();
- if (debug) std::cerr << "writing" << function->name << std::endl;
- StackWriter stackWriter(function, *this, o, sourceMap, debug);
- o << U32LEB(
- (stackWriter.numLocalsByType[i32] ? 1 : 0) +
- (stackWriter.numLocalsByType[i64] ? 1 : 0) +
- (stackWriter.numLocalsByType[f32] ? 1 : 0) +
- (stackWriter.numLocalsByType[f64] ? 1 : 0)
- );
- if (stackWriter.numLocalsByType[i32]) o << U32LEB(stackWriter.numLocalsByType[i32]) << binaryType(i32);
- if (stackWriter.numLocalsByType[i64]) o << U32LEB(stackWriter.numLocalsByType[i64]) << binaryType(i64);
- if (stackWriter.numLocalsByType[f32]) o << U32LEB(stackWriter.numLocalsByType[f32]) << binaryType(f32);
- if (stackWriter.numLocalsByType[f64]) o << U32LEB(stackWriter.numLocalsByType[f64]) << binaryType(f64);
-
- stackWriter.visitPossibleBlockContents(function->body);
- o << int8_t(BinaryConsts::End);
+ Function* func = wasm->functions[i].get();
+ if (debug) std::cerr << "writing" << func->name << std::endl;
+ // Emit Stack IR if present, and if we can
+ if (func->stackIR && !sourceMap) {
+ if (debug) std::cerr << "write Stack IR" << std::endl;
+ StackIRFunctionStackWriter<WasmBinaryWriter>(func, *this, o, debug);
+ } else {
+ if (debug) std::cerr << "write Binaryen IR" << std::endl;
+ FunctionStackWriter<WasmBinaryWriter>(func, *this, o, sourceMap, debug);
+ }
size_t size = o.size() - start;
assert(size <= std::numeric_limits<uint32_t>::max());
if (debug) std::cerr << "body size: " << size << ", writing at " << sizePos << ", next starts at " << o.size() << std::endl;
@@ -263,7 +257,7 @@ void WasmBinaryWriter::writeFunctions() {
}
}
}
- tableOfContents.functionBodies.emplace_back(function->name, sizePos + sizeFieldSize, size);
+ tableOfContents.functionBodies.emplace_back(func->name, sizePos + sizeFieldSize, size);
}
finishSection(start);
}
@@ -601,745 +595,6 @@ void WasmBinaryWriter::finishUp() {
}
}
-// StackWriter
-
-void StackWriter::mapLocals() {
- for (Index i = 0; i < func->getNumParams(); i++) {
- size_t curr = mappedLocals.size();
- mappedLocals[i] = curr;
- }
- for (auto type : func->vars) {
- numLocalsByType[type]++;
- }
- std::map<Type, size_t> currLocalsByType;
- for (Index i = func->getVarIndexBase(); i < func->getNumLocals(); i++) {
- size_t index = func->getVarIndexBase();
- Type type = func->getLocalType(i);
- currLocalsByType[type]++; // increment now for simplicity, must decrement it in returns
- if (type == i32) {
- mappedLocals[i] = index + currLocalsByType[i32] - 1;
- continue;
- }
- index += numLocalsByType[i32];
- if (type == i64) {
- mappedLocals[i] = index + currLocalsByType[i64] - 1;
- continue;
- }
- index += numLocalsByType[i64];
- if (type == f32) {
- mappedLocals[i] = index + currLocalsByType[f32] - 1;
- continue;
- }
- index += numLocalsByType[f32];
- if (type == f64) {
- mappedLocals[i] = index + currLocalsByType[f64] - 1;
- continue;
- }
- abort();
- }
-}
-
-void StackWriter::visit(Expression* curr) {
- if (sourceMap) {
- parent.writeDebugLocation(curr, func);
- }
- Visitor<StackWriter>::visit(curr);
-}
-
-static bool brokenTo(Block* block) {
- return block->name.is() && BranchUtils::BranchSeeker::hasNamed(block, block->name);
-}
-
-// emits a node, but if it is a block with no name, emit a list of its contents
-void StackWriter::visitPossibleBlockContents(Expression* curr) {
- auto* block = curr->dynCast<Block>();
- if (!block || brokenTo(block)) {
- visit(curr);
- return;
- }
- for (auto* child : block->list) {
- visit(child);
- }
- if (block->type == unreachable && block->list.back()->type != unreachable) {
- // similar to in visitBlock, here we could skip emitting the block itself,
- // but must still end the 'block' (the contents, really) with an unreachable
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitBlock(Block *curr) {
- if (debug) std::cerr << "zz node: Block" << std::endl;
- o << int8_t(BinaryConsts::Block);
- o << binaryType(curr->type != unreachable ? curr->type : none);
- breakStack.push_back(curr->name);
- Index i = 0;
- for (auto* child : curr->list) {
- if (debug) std::cerr << " " << size_t(curr) << "\n zz Block element " << i++ << std::endl;
- visit(child);
- }
- breakStack.pop_back();
- if (curr->type == unreachable) {
- // an unreachable block is one that cannot be exited. We cannot encode this directly
- // in wasm, where blocks must be none,i32,i64,f32,f64. Since the block cannot be
- // exited, we can emit an unreachable at the end, and that will always be valid,
- // and then the block is ok as a none
- o << int8_t(BinaryConsts::Unreachable);
- }
- o << int8_t(BinaryConsts::End);
- if (curr->type == unreachable) {
- // and emit an unreachable *outside* the block too, so later things can pop anything
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitIf(If *curr) {
- if (debug) std::cerr << "zz node: If" << std::endl;
- if (curr->condition->type == unreachable) {
- // this if-else is unreachable because of the condition, i.e., the condition
- // does not exit. So don't emit the if, but do consume the condition
- visit(curr->condition);
- o << int8_t(BinaryConsts::Unreachable);
- return;
- }
- visit(curr->condition);
- o << int8_t(BinaryConsts::If);
- o << binaryType(curr->type != unreachable ? curr->type : none);
- breakStack.push_back(IMPOSSIBLE_CONTINUE); // the binary format requires this; we have a block if we need one; TODO: optimize
- visitPossibleBlockContents(curr->ifTrue); // TODO: emit block contents directly, if possible
- breakStack.pop_back();
- if (curr->ifFalse) {
- o << int8_t(BinaryConsts::Else);
- breakStack.push_back(IMPOSSIBLE_CONTINUE); // TODO ditto
- visitPossibleBlockContents(curr->ifFalse);
- breakStack.pop_back();
- }
- o << int8_t(BinaryConsts::End);
- if (curr->type == unreachable) {
- // we already handled the case of the condition being unreachable. otherwise,
- // we may still be unreachable, if we are an if-else with both sides unreachable.
- // wasm does not allow this to be emitted directly, so we must do something more. we could do
- // better, but for now we emit an extra unreachable instruction after the if, so it is not consumed itself,
- assert(curr->ifFalse);
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitLoop(Loop *curr) {
- if (debug) std::cerr << "zz node: Loop" << std::endl;
- o << int8_t(BinaryConsts::Loop);
- o << binaryType(curr->type != unreachable ? curr->type : none);
- breakStack.push_back(curr->name);
- visitPossibleBlockContents(curr->body);
- breakStack.pop_back();
- o << int8_t(BinaryConsts::End);
- if (curr->type == unreachable) {
- // we emitted a loop without a return type, so it must not be consumed
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitBreak(Break *curr) {
- if (debug) std::cerr << "zz node: Break" << std::endl;
- if (curr->value) {
- visit(curr->value);
- }
- if (curr->condition) visit(curr->condition);
- o << int8_t(curr->condition ? BinaryConsts::BrIf : BinaryConsts::Br)
- << U32LEB(getBreakIndex(curr->name));
- if (curr->condition && curr->type == unreachable) {
- // a br_if is normally none or emits a value. if it is unreachable,
- // then either the condition or the value is unreachable, which is
- // extremely rare, and may require us to make the stack polymorphic
- // (if the block we branch to has a value, we may lack one as we
- // are not a reachable branch; the wasm spec on the other hand does
- // presume the br_if emits a value of the right type, even if it
- // popped unreachable)
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitSwitch(Switch *curr) {
- if (debug) std::cerr << "zz node: Switch" << std::endl;
- if (curr->value) {
- visit(curr->value);
- }
- visit(curr->condition);
- if (!BranchUtils::isBranchReachable(curr)) {
- // if the branch is not reachable, then it's dangerous to emit it, as
- // wasm type checking rules are different, especially in unreachable
- // code. so just don't emit that unreachable code.
- o << int8_t(BinaryConsts::Unreachable);
- return;
- }
- o << int8_t(BinaryConsts::TableSwitch) << U32LEB(curr->targets.size());
- for (auto target : curr->targets) {
- o << U32LEB(getBreakIndex(target));
- }
- o << U32LEB(getBreakIndex(curr->default_));
-}
-
-void StackWriter::visitCall(Call *curr) {
- if (debug) std::cerr << "zz node: Call" << std::endl;
- for (auto* operand : curr->operands) {
- visit(operand);
- }
- o << int8_t(BinaryConsts::CallFunction) << U32LEB(parent.getFunctionIndex(curr->target));
- if (curr->type == unreachable) {
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitCallImport(CallImport *curr) {
- if (debug) std::cerr << "zz node: CallImport" << std::endl;
- for (auto* operand : curr->operands) {
- visit(operand);
- }
- o << int8_t(BinaryConsts::CallFunction) << U32LEB(parent.getFunctionIndex(curr->target));
-}
-
-void StackWriter::visitCallIndirect(CallIndirect *curr) {
- if (debug) std::cerr << "zz node: CallIndirect" << std::endl;
-
- for (auto* operand : curr->operands) {
- visit(operand);
- }
- visit(curr->target);
- o << int8_t(BinaryConsts::CallIndirect)
- << U32LEB(parent.getFunctionTypeIndex(curr->fullType))
- << U32LEB(0); // Reserved flags field
- if (curr->type == unreachable) {
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitGetLocal(GetLocal *curr) {
- if (debug) std::cerr << "zz node: GetLocal " << (o.size() + 1) << std::endl;
- o << int8_t(BinaryConsts::GetLocal) << U32LEB(mappedLocals[curr->index]);
-}
-
-void StackWriter::visitSetLocal(SetLocal *curr) {
- if (debug) std::cerr << "zz node: Set|TeeLocal" << std::endl;
- visit(curr->value);
- o << int8_t(curr->isTee() ? BinaryConsts::TeeLocal : BinaryConsts::SetLocal) << U32LEB(mappedLocals[curr->index]);
- if (curr->type == unreachable) {
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitGetGlobal(GetGlobal *curr) {
- if (debug) std::cerr << "zz node: GetGlobal " << (o.size() + 1) << std::endl;
- o << int8_t(BinaryConsts::GetGlobal) << U32LEB(parent.getGlobalIndex(curr->name));
-}
-
-void StackWriter::visitSetGlobal(SetGlobal *curr) {
- if (debug) std::cerr << "zz node: SetGlobal" << std::endl;
- visit(curr->value);
- o << int8_t(BinaryConsts::SetGlobal) << U32LEB(parent.getGlobalIndex(curr->name));
-}
-
-void StackWriter::visitLoad(Load *curr) {
- if (debug) std::cerr << "zz node: Load" << std::endl;
- visit(curr->ptr);
- if (!curr->isAtomic) {
- switch (curr->type) {
- case i32: {
- switch (curr->bytes) {
- case 1: o << int8_t(curr->signed_ ? BinaryConsts::I32LoadMem8S : BinaryConsts::I32LoadMem8U); break;
- case 2: o << int8_t(curr->signed_ ? BinaryConsts::I32LoadMem16S : BinaryConsts::I32LoadMem16U); break;
- case 4: o << int8_t(BinaryConsts::I32LoadMem); break;
- default: abort();
- }
- break;
- }
- case i64: {
- switch (curr->bytes) {
- case 1: o << int8_t(curr->signed_ ? BinaryConsts::I64LoadMem8S : BinaryConsts::I64LoadMem8U); break;
- case 2: o << int8_t(curr->signed_ ? BinaryConsts::I64LoadMem16S : BinaryConsts::I64LoadMem16U); break;
- case 4: o << int8_t(curr->signed_ ? BinaryConsts::I64LoadMem32S : BinaryConsts::I64LoadMem32U); break;
- case 8: o << int8_t(BinaryConsts::I64LoadMem); break;
- default: abort();
- }
- break;
- }
- case f32: o << int8_t(BinaryConsts::F32LoadMem); break;
- case f64: o << int8_t(BinaryConsts::F64LoadMem); break;
- case unreachable: return; // the pointer is unreachable, so we are never reached; just don't emit a load
- default: WASM_UNREACHABLE();
- }
- } else {
- if (curr->type == unreachable) {
- // don't even emit it; we don't know the right type
- o << int8_t(BinaryConsts::Unreachable);
- return;
- }
- o << int8_t(BinaryConsts::AtomicPrefix);
- switch (curr->type) {
- case i32: {
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I32AtomicLoad8U); break;
- case 2: o << int8_t(BinaryConsts::I32AtomicLoad16U); break;
- case 4: o << int8_t(BinaryConsts::I32AtomicLoad); break;
- default: WASM_UNREACHABLE();
- }
- break;
- }
- case i64: {
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I64AtomicLoad8U); break;
- case 2: o << int8_t(BinaryConsts::I64AtomicLoad16U); break;
- case 4: o << int8_t(BinaryConsts::I64AtomicLoad32U); break;
- case 8: o << int8_t(BinaryConsts::I64AtomicLoad); break;
- default: WASM_UNREACHABLE();
- }
- break;
- }
- case unreachable: return;
- default: WASM_UNREACHABLE();
- }
- }
- emitMemoryAccess(curr->align, curr->bytes, curr->offset);
-}
-
-void StackWriter::visitStore(Store *curr) {
- if (debug) std::cerr << "zz node: Store" << std::endl;
- visit(curr->ptr);
- visit(curr->value);
- if (!curr->isAtomic) {
- switch (curr->valueType) {
- case i32: {
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I32StoreMem8); break;
- case 2: o << int8_t(BinaryConsts::I32StoreMem16); break;
- case 4: o << int8_t(BinaryConsts::I32StoreMem); break;
- default: abort();
- }
- break;
- }
- case i64: {
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I64StoreMem8); break;
- case 2: o << int8_t(BinaryConsts::I64StoreMem16); break;
- case 4: o << int8_t(BinaryConsts::I64StoreMem32); break;
- case 8: o << int8_t(BinaryConsts::I64StoreMem); break;
- default: abort();
- }
- break;
- }
- case f32: o << int8_t(BinaryConsts::F32StoreMem); break;
- case f64: o << int8_t(BinaryConsts::F64StoreMem); break;
- default: abort();
- }
- } else {
- if (curr->type == unreachable) {
- // don't even emit it; we don't know the right type
- o << int8_t(BinaryConsts::Unreachable);
- return;
- }
- o << int8_t(BinaryConsts::AtomicPrefix);
- switch (curr->valueType) {
- case i32: {
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I32AtomicStore8); break;
- case 2: o << int8_t(BinaryConsts::I32AtomicStore16); break;
- case 4: o << int8_t(BinaryConsts::I32AtomicStore); break;
- default: WASM_UNREACHABLE();
- }
- break;
- }
- case i64: {
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I64AtomicStore8); break;
- case 2: o << int8_t(BinaryConsts::I64AtomicStore16); break;
- case 4: o << int8_t(BinaryConsts::I64AtomicStore32); break;
- case 8: o << int8_t(BinaryConsts::I64AtomicStore); break;
- default: WASM_UNREACHABLE();
- }
- break;
- }
- default: WASM_UNREACHABLE();
- }
- }
- emitMemoryAccess(curr->align, curr->bytes, curr->offset);
-}
-
-void StackWriter::visitAtomicRMW(AtomicRMW *curr) {
- if (debug) std::cerr << "zz node: AtomicRMW" << std::endl;
- visit(curr->ptr);
- // stop if the rest isn't reachable anyhow
- if (curr->ptr->type == unreachable) return;
- visit(curr->value);
- if (curr->value->type == unreachable) return;
-
- if (curr->type == unreachable) {
- // don't even emit it; we don't know the right type
- o << int8_t(BinaryConsts::Unreachable);
- return;
- }
-
- o << int8_t(BinaryConsts::AtomicPrefix);
-
-#define CASE_FOR_OP(Op) \
- case Op: \
- switch (curr->type) { \
- case i32: \
- switch (curr->bytes) { \
- case 1: o << int8_t(BinaryConsts::I32AtomicRMW##Op##8U); break; \
- case 2: o << int8_t(BinaryConsts::I32AtomicRMW##Op##16U); break; \
- case 4: o << int8_t(BinaryConsts::I32AtomicRMW##Op); break; \
- default: WASM_UNREACHABLE(); \
- } \
- break; \
- case i64: \
- switch (curr->bytes) { \
- case 1: o << int8_t(BinaryConsts::I64AtomicRMW##Op##8U); break; \
- case 2: o << int8_t(BinaryConsts::I64AtomicRMW##Op##16U); break; \
- case 4: o << int8_t(BinaryConsts::I64AtomicRMW##Op##32U); break; \
- case 8: o << int8_t(BinaryConsts::I64AtomicRMW##Op); break; \
- default: WASM_UNREACHABLE(); \
- } \
- break; \
- default: WASM_UNREACHABLE(); \
- } \
- break
-
- switch(curr->op) {
- CASE_FOR_OP(Add);
- CASE_FOR_OP(Sub);
- CASE_FOR_OP(And);
- CASE_FOR_OP(Or);
- CASE_FOR_OP(Xor);
- CASE_FOR_OP(Xchg);
- default: WASM_UNREACHABLE();
- }
-#undef CASE_FOR_OP
-
- emitMemoryAccess(curr->bytes, curr->bytes, curr->offset);
-}
-
-void StackWriter::visitAtomicCmpxchg(AtomicCmpxchg *curr) {
- if (debug) std::cerr << "zz node: AtomicCmpxchg" << std::endl;
- visit(curr->ptr);
- // stop if the rest isn't reachable anyhow
- if (curr->ptr->type == unreachable) return;
- visit(curr->expected);
- if (curr->expected->type == unreachable) return;
- visit(curr->replacement);
- if (curr->replacement->type == unreachable) return;
-
- if (curr->type == unreachable) {
- // don't even emit it; we don't know the right type
- o << int8_t(BinaryConsts::Unreachable);
- return;
- }
-
- o << int8_t(BinaryConsts::AtomicPrefix);
- switch (curr->type) {
- case i32:
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I32AtomicCmpxchg8U); break;
- case 2: o << int8_t(BinaryConsts::I32AtomicCmpxchg16U); break;
- case 4: o << int8_t(BinaryConsts::I32AtomicCmpxchg); break;
- default: WASM_UNREACHABLE();
- }
- break;
- case i64:
- switch (curr->bytes) {
- case 1: o << int8_t(BinaryConsts::I64AtomicCmpxchg8U); break;
- case 2: o << int8_t(BinaryConsts::I64AtomicCmpxchg16U); break;
- case 4: o << int8_t(BinaryConsts::I64AtomicCmpxchg32U); break;
- case 8: o << int8_t(BinaryConsts::I64AtomicCmpxchg); break;
- default: WASM_UNREACHABLE();
- }
- break;
- default: WASM_UNREACHABLE();
- }
- emitMemoryAccess(curr->bytes, curr->bytes, curr->offset);
-}
-
-void StackWriter::visitAtomicWait(AtomicWait *curr) {
- if (debug) std::cerr << "zz node: AtomicWait" << std::endl;
- visit(curr->ptr);
- // stop if the rest isn't reachable anyhow
- if (curr->ptr->type == unreachable) return;
- visit(curr->expected);
- if (curr->expected->type == unreachable) return;
- visit(curr->timeout);
- if (curr->timeout->type == unreachable) return;
-
- o << int8_t(BinaryConsts::AtomicPrefix);
- switch (curr->expectedType) {
- case i32: {
- o << int8_t(BinaryConsts::I32AtomicWait);
- emitMemoryAccess(4, 4, 0);
- break;
- }
- case i64: {
- o << int8_t(BinaryConsts::I64AtomicWait);
- emitMemoryAccess(8, 8, 0);
- break;
- }
- default: WASM_UNREACHABLE();
- }
-}
-
-void StackWriter::visitAtomicWake(AtomicWake *curr) {
- if (debug) std::cerr << "zz node: AtomicWake" << std::endl;
- visit(curr->ptr);
- // stop if the rest isn't reachable anyhow
- if (curr->ptr->type == unreachable) return;
- visit(curr->wakeCount);
- if (curr->wakeCount->type == unreachable) return;
-
- o << int8_t(BinaryConsts::AtomicPrefix) << int8_t(BinaryConsts::AtomicWake);
- emitMemoryAccess(4, 4, 0);
-}
-
-void StackWriter::visitConst(Const *curr) {
- if (debug) std::cerr << "zz node: Const" << curr << " : " << curr->type << std::endl;
- switch (curr->type) {
- case i32: {
- o << int8_t(BinaryConsts::I32Const) << S32LEB(curr->value.geti32());
- break;
- }
- case i64: {
- o << int8_t(BinaryConsts::I64Const) << S64LEB(curr->value.geti64());
- break;
- }
- case f32: {
- o << int8_t(BinaryConsts::F32Const) << curr->value.reinterpreti32();
- break;
- }
- case f64: {
- o << int8_t(BinaryConsts::F64Const) << curr->value.reinterpreti64();
- break;
- }
- default: abort();
- }
- if (debug) std::cerr << "zz const node done.\n";
-}
-
-void StackWriter::visitUnary(Unary *curr) {
- if (debug) std::cerr << "zz node: Unary" << std::endl;
- visit(curr->value);
- switch (curr->op) {
- case ClzInt32: o << int8_t(BinaryConsts::I32Clz); break;
- case CtzInt32: o << int8_t(BinaryConsts::I32Ctz); break;
- case PopcntInt32: o << int8_t(BinaryConsts::I32Popcnt); break;
- case EqZInt32: o << int8_t(BinaryConsts::I32EqZ); break;
- case ClzInt64: o << int8_t(BinaryConsts::I64Clz); break;
- case CtzInt64: o << int8_t(BinaryConsts::I64Ctz); break;
- case PopcntInt64: o << int8_t(BinaryConsts::I64Popcnt); break;
- case EqZInt64: o << int8_t(BinaryConsts::I64EqZ); break;
- case NegFloat32: o << int8_t(BinaryConsts::F32Neg); break;
- case AbsFloat32: o << int8_t(BinaryConsts::F32Abs); break;
- case CeilFloat32: o << int8_t(BinaryConsts::F32Ceil); break;
- case FloorFloat32: o << int8_t(BinaryConsts::F32Floor); break;
- case TruncFloat32: o << int8_t(BinaryConsts::F32Trunc); break;
- case NearestFloat32: o << int8_t(BinaryConsts::F32NearestInt); break;
- case SqrtFloat32: o << int8_t(BinaryConsts::F32Sqrt); break;
- case NegFloat64: o << int8_t(BinaryConsts::F64Neg); break;
- case AbsFloat64: o << int8_t(BinaryConsts::F64Abs); break;
- case CeilFloat64: o << int8_t(BinaryConsts::F64Ceil); break;
- case FloorFloat64: o << int8_t(BinaryConsts::F64Floor); break;
- case TruncFloat64: o << int8_t(BinaryConsts::F64Trunc); break;
- case NearestFloat64: o << int8_t(BinaryConsts::F64NearestInt); break;
- case SqrtFloat64: o << int8_t(BinaryConsts::F64Sqrt); break;
- case ExtendSInt32: o << int8_t(BinaryConsts::I64STruncI32); break;
- case ExtendUInt32: o << int8_t(BinaryConsts::I64UTruncI32); break;
- case WrapInt64: o << int8_t(BinaryConsts::I32ConvertI64); break;
- case TruncUFloat32ToInt32: o << int8_t(BinaryConsts::I32UTruncF32); break;
- case TruncUFloat32ToInt64: o << int8_t(BinaryConsts::I64UTruncF32); break;
- case TruncSFloat32ToInt32: o << int8_t(BinaryConsts::I32STruncF32); break;
- case TruncSFloat32ToInt64: o << int8_t(BinaryConsts::I64STruncF32); break;
- case TruncUFloat64ToInt32: o << int8_t(BinaryConsts::I32UTruncF64); break;
- case TruncUFloat64ToInt64: o << int8_t(BinaryConsts::I64UTruncF64); break;
- case TruncSFloat64ToInt32: o << int8_t(BinaryConsts::I32STruncF64); break;
- case TruncSFloat64ToInt64: o << int8_t(BinaryConsts::I64STruncF64); break;
- case ConvertUInt32ToFloat32: o << int8_t(BinaryConsts::F32UConvertI32); break;
- case ConvertUInt32ToFloat64: o << int8_t(BinaryConsts::F64UConvertI32); break;
- case ConvertSInt32ToFloat32: o << int8_t(BinaryConsts::F32SConvertI32); break;
- case ConvertSInt32ToFloat64: o << int8_t(BinaryConsts::F64SConvertI32); break;
- case ConvertUInt64ToFloat32: o << int8_t(BinaryConsts::F32UConvertI64); break;
- case ConvertUInt64ToFloat64: o << int8_t(BinaryConsts::F64UConvertI64); break;
- case ConvertSInt64ToFloat32: o << int8_t(BinaryConsts::F32SConvertI64); break;
- case ConvertSInt64ToFloat64: o << int8_t(BinaryConsts::F64SConvertI64); break;
- case DemoteFloat64: o << int8_t(BinaryConsts::F32ConvertF64); break;
- case PromoteFloat32: o << int8_t(BinaryConsts::F64ConvertF32); break;
- case ReinterpretFloat32: o << int8_t(BinaryConsts::I32ReinterpretF32); break;
- case ReinterpretFloat64: o << int8_t(BinaryConsts::I64ReinterpretF64); break;
- case ReinterpretInt32: o << int8_t(BinaryConsts::F32ReinterpretI32); break;
- case ReinterpretInt64: o << int8_t(BinaryConsts::F64ReinterpretI64); break;
- case ExtendS8Int32: o << int8_t(BinaryConsts::I32ExtendS8); break;
- case ExtendS16Int32: o << int8_t(BinaryConsts::I32ExtendS16); break;
- case ExtendS8Int64: o << int8_t(BinaryConsts::I64ExtendS8); break;
- case ExtendS16Int64: o << int8_t(BinaryConsts::I64ExtendS16); break;
- case ExtendS32Int64: o << int8_t(BinaryConsts::I64ExtendS32); break;
- default: abort();
- }
- if (curr->type == unreachable) {
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitBinary(Binary *curr) {
- if (debug) std::cerr << "zz node: Binary" << std::endl;
- visit(curr->left);
- visit(curr->right);
-
- switch (curr->op) {
- case AddInt32: o << int8_t(BinaryConsts::I32Add); break;
- case SubInt32: o << int8_t(BinaryConsts::I32Sub); break;
- case MulInt32: o << int8_t(BinaryConsts::I32Mul); break;
- case DivSInt32: o << int8_t(BinaryConsts::I32DivS); break;
- case DivUInt32: o << int8_t(BinaryConsts::I32DivU); break;
- case RemSInt32: o << int8_t(BinaryConsts::I32RemS); break;
- case RemUInt32: o << int8_t(BinaryConsts::I32RemU); break;
- case AndInt32: o << int8_t(BinaryConsts::I32And); break;
- case OrInt32: o << int8_t(BinaryConsts::I32Or); break;
- case XorInt32: o << int8_t(BinaryConsts::I32Xor); break;
- case ShlInt32: o << int8_t(BinaryConsts::I32Shl); break;
- case ShrUInt32: o << int8_t(BinaryConsts::I32ShrU); break;
- case ShrSInt32: o << int8_t(BinaryConsts::I32ShrS); break;
- case RotLInt32: o << int8_t(BinaryConsts::I32RotL); break;
- case RotRInt32: o << int8_t(BinaryConsts::I32RotR); break;
- case EqInt32: o << int8_t(BinaryConsts::I32Eq); break;
- case NeInt32: o << int8_t(BinaryConsts::I32Ne); break;
- case LtSInt32: o << int8_t(BinaryConsts::I32LtS); break;
- case LtUInt32: o << int8_t(BinaryConsts::I32LtU); break;
- case LeSInt32: o << int8_t(BinaryConsts::I32LeS); break;
- case LeUInt32: o << int8_t(BinaryConsts::I32LeU); break;
- case GtSInt32: o << int8_t(BinaryConsts::I32GtS); break;
- case GtUInt32: o << int8_t(BinaryConsts::I32GtU); break;
- case GeSInt32: o << int8_t(BinaryConsts::I32GeS); break;
- case GeUInt32: o << int8_t(BinaryConsts::I32GeU); break;
-
- case AddInt64: o << int8_t(BinaryConsts::I64Add); break;
- case SubInt64: o << int8_t(BinaryConsts::I64Sub); break;
- case MulInt64: o << int8_t(BinaryConsts::I64Mul); break;
- case DivSInt64: o << int8_t(BinaryConsts::I64DivS); break;
- case DivUInt64: o << int8_t(BinaryConsts::I64DivU); break;
- case RemSInt64: o << int8_t(BinaryConsts::I64RemS); break;
- case RemUInt64: o << int8_t(BinaryConsts::I64RemU); break;
- case AndInt64: o << int8_t(BinaryConsts::I64And); break;
- case OrInt64: o << int8_t(BinaryConsts::I64Or); break;
- case XorInt64: o << int8_t(BinaryConsts::I64Xor); break;
- case ShlInt64: o << int8_t(BinaryConsts::I64Shl); break;
- case ShrUInt64: o << int8_t(BinaryConsts::I64ShrU); break;
- case ShrSInt64: o << int8_t(BinaryConsts::I64ShrS); break;
- case RotLInt64: o << int8_t(BinaryConsts::I64RotL); break;
- case RotRInt64: o << int8_t(BinaryConsts::I64RotR); break;
- case EqInt64: o << int8_t(BinaryConsts::I64Eq); break;
- case NeInt64: o << int8_t(BinaryConsts::I64Ne); break;
- case LtSInt64: o << int8_t(BinaryConsts::I64LtS); break;
- case LtUInt64: o << int8_t(BinaryConsts::I64LtU); break;
- case LeSInt64: o << int8_t(BinaryConsts::I64LeS); break;
- case LeUInt64: o << int8_t(BinaryConsts::I64LeU); break;
- case GtSInt64: o << int8_t(BinaryConsts::I64GtS); break;
- case GtUInt64: o << int8_t(BinaryConsts::I64GtU); break;
- case GeSInt64: o << int8_t(BinaryConsts::I64GeS); break;
- case GeUInt64: o << int8_t(BinaryConsts::I64GeU); break;
-
- case AddFloat32: o << int8_t(BinaryConsts::F32Add); break;
- case SubFloat32: o << int8_t(BinaryConsts::F32Sub); break;
- case MulFloat32: o << int8_t(BinaryConsts::F32Mul); break;
- case DivFloat32: o << int8_t(BinaryConsts::F32Div); break;
- case CopySignFloat32: o << int8_t(BinaryConsts::F32CopySign);break;
- case MinFloat32: o << int8_t(BinaryConsts::F32Min); break;
- case MaxFloat32: o << int8_t(BinaryConsts::F32Max); break;
- case EqFloat32: o << int8_t(BinaryConsts::F32Eq); break;
- case NeFloat32: o << int8_t(BinaryConsts::F32Ne); break;
- case LtFloat32: o << int8_t(BinaryConsts::F32Lt); break;
- case LeFloat32: o << int8_t(BinaryConsts::F32Le); break;
- case GtFloat32: o << int8_t(BinaryConsts::F32Gt); break;
- case GeFloat32: o << int8_t(BinaryConsts::F32Ge); break;
-
- case AddFloat64: o << int8_t(BinaryConsts::F64Add); break;
- case SubFloat64: o << int8_t(BinaryConsts::F64Sub); break;
- case MulFloat64: o << int8_t(BinaryConsts::F64Mul); break;
- case DivFloat64: o << int8_t(BinaryConsts::F64Div); break;
- case CopySignFloat64: o << int8_t(BinaryConsts::F64CopySign);break;
- case MinFloat64: o << int8_t(BinaryConsts::F64Min); break;
- case MaxFloat64: o << int8_t(BinaryConsts::F64Max); break;
- case EqFloat64: o << int8_t(BinaryConsts::F64Eq); break;
- case NeFloat64: o << int8_t(BinaryConsts::F64Ne); break;
- case LtFloat64: o << int8_t(BinaryConsts::F64Lt); break;
- case LeFloat64: o << int8_t(BinaryConsts::F64Le); break;
- case GtFloat64: o << int8_t(BinaryConsts::F64Gt); break;
- case GeFloat64: o << int8_t(BinaryConsts::F64Ge); break;
- default: abort();
- }
- if (curr->type == unreachable) {
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitSelect(Select *curr) {
- if (debug) std::cerr << "zz node: Select" << std::endl;
- visit(curr->ifTrue);
- visit(curr->ifFalse);
- visit(curr->condition);
- o << int8_t(BinaryConsts::Select);
- if (curr->type == unreachable) {
- o << int8_t(BinaryConsts::Unreachable);
- }
-}
-
-void StackWriter::visitReturn(Return *curr) {
- if (debug) std::cerr << "zz node: Return" << std::endl;
- if (curr->value) {
- visit(curr->value);
- }
- o << int8_t(BinaryConsts::Return);
-}
-
-void StackWriter::visitHost(Host *curr) {
- if (debug) std::cerr << "zz node: Host" << std::endl;
- switch (curr->op) {
- case CurrentMemory: {
- o << int8_t(BinaryConsts::CurrentMemory);
- break;
- }
- case GrowMemory: {
- visit(curr->operands[0]);
- o << int8_t(BinaryConsts::GrowMemory);
- break;
- }
- default: abort();
- }
- o << U32LEB(0); // Reserved flags field
-}
-
-void StackWriter::visitNop(Nop *curr) {
- if (debug) std::cerr << "zz node: Nop" << std::endl;
- o << int8_t(BinaryConsts::Nop);
-}
-
-void StackWriter::visitUnreachable(Unreachable *curr) {
- if (debug) std::cerr << "zz node: Unreachable" << std::endl;
- o << int8_t(BinaryConsts::Unreachable);
-}
-
-void StackWriter::visitDrop(Drop *curr) {
- if (debug) std::cerr << "zz node: Drop" << std::endl;
- visit(curr->value);
- o << int8_t(BinaryConsts::Drop);
-}
-
-int32_t StackWriter::getBreakIndex(Name name) { // -1 if not found
- for (int i = breakStack.size() - 1; i >= 0; i--) {
- if (breakStack[i] == name) {
- return breakStack.size() - 1 - i;
- }
- }
- std::cerr << "bad break: " << name << " in " << func->name << std::endl;
- abort();
-}
-
-void StackWriter::emitMemoryAccess(size_t alignment, size_t bytes, uint32_t offset) {
- o << U32LEB(Log2(alignment ? alignment : bytes));
- o << U32LEB(offset);
-}
-
// reader
void WasmBinaryBuilder::read() {
@@ -2408,7 +1663,7 @@ void WasmBinaryBuilder::pushBlockElements(Block* curr, size_t start, size_t end)
}
}
-void WasmBinaryBuilder::visitBlock(Block *curr) {
+void WasmBinaryBuilder::visitBlock(Block* curr) {
if (debug) std::cerr << "zz node: Block" << std::endl;
// special-case Block and de-recurse nested blocks in their first position, as that is
// a common pattern that can be very highly nested.
@@ -2475,7 +1730,7 @@ Expression* WasmBinaryBuilder::getBlockOrSingleton(Type type) {
return block;
}
-void WasmBinaryBuilder::visitIf(If *curr) {
+void WasmBinaryBuilder::visitIf(If* curr) {
if (debug) std::cerr << "zz node: If" << std::endl;
curr->type = getType();
curr->condition = popNonVoidExpression();
@@ -2489,7 +1744,7 @@ void WasmBinaryBuilder::visitIf(If *curr) {
}
}
-void WasmBinaryBuilder::visitLoop(Loop *curr) {
+void WasmBinaryBuilder::visitLoop(Loop* curr) {
if (debug) std::cerr << "zz node: Loop" << std::endl;
curr->type = getType();
curr->name = getNextLabel();
@@ -2546,7 +1801,7 @@ void WasmBinaryBuilder::visitBreak(Break *curr, uint8_t code) {
curr->finalize();
}
-void WasmBinaryBuilder::visitSwitch(Switch *curr) {
+void WasmBinaryBuilder::visitSwitch(Switch* curr) {
if (debug) std::cerr << "zz node: Switch" << std::endl;
curr->condition = popNonVoidExpression();
auto numTargets = getU32LEB();
@@ -2592,7 +1847,7 @@ Expression* WasmBinaryBuilder::visitCall() {
return ret;
}
-void WasmBinaryBuilder::visitCallIndirect(CallIndirect *curr) {
+void WasmBinaryBuilder::visitCallIndirect(CallIndirect* curr) {
if (debug) std::cerr << "zz node: CallIndirect" << std::endl;
auto index = getU32LEB();
if (index >= wasm.functionTypes.size()) {
@@ -2612,7 +1867,7 @@ void WasmBinaryBuilder::visitCallIndirect(CallIndirect *curr) {
curr->finalize();
}
-void WasmBinaryBuilder::visitGetLocal(GetLocal *curr) {
+void WasmBinaryBuilder::visitGetLocal(GetLocal* curr) {
if (debug) std::cerr << "zz node: GetLocal " << pos << std::endl;
requireFunctionContext("get_local");
curr->index = getU32LEB();
@@ -2636,7 +1891,7 @@ void WasmBinaryBuilder::visitSetLocal(SetLocal *curr, uint8_t code) {
curr->finalize();
}
-void WasmBinaryBuilder::visitGetGlobal(GetGlobal *curr) {
+void WasmBinaryBuilder::visitGetGlobal(GetGlobal* curr) {
if (debug) std::cerr << "zz node: GetGlobal " << pos << std::endl;
auto index = getU32LEB();
curr->name = getGlobalName(index);
@@ -2653,7 +1908,7 @@ void WasmBinaryBuilder::visitGetGlobal(GetGlobal *curr) {
throwError("bad get_global");
}
-void WasmBinaryBuilder::visitSetGlobal(SetGlobal *curr) {
+void WasmBinaryBuilder::visitSetGlobal(SetGlobal* curr) {
if (debug) std::cerr << "zz node: SetGlobal" << std::endl;
auto index = getU32LEB();
curr->name = getGlobalName(index);
@@ -3012,7 +2267,7 @@ bool WasmBinaryBuilder::maybeVisitBinary(Expression*& out, uint8_t code) {
#undef FLOAT_TYPED_CODE
}
-void WasmBinaryBuilder::visitSelect(Select *curr) {
+void WasmBinaryBuilder::visitSelect(Select* curr) {
if (debug) std::cerr << "zz node: Select" << std::endl;
curr->condition = popNonVoidExpression();
curr->ifFalse = popNonVoidExpression();
@@ -3020,7 +2275,7 @@ void WasmBinaryBuilder::visitSelect(Select *curr) {
curr->finalize();
}
-void WasmBinaryBuilder::visitReturn(Return *curr) {
+void WasmBinaryBuilder::visitReturn(Return* curr) {
if (debug) std::cerr << "zz node: Return" << std::endl;
requireFunctionContext("return");
if (currFunction->result != none) {
@@ -3055,15 +2310,15 @@ bool WasmBinaryBuilder::maybeVisitHost(Expression*& out, uint8_t code) {
return true;
}
-void WasmBinaryBuilder::visitNop(Nop *curr) {
+void WasmBinaryBuilder::visitNop(Nop* curr) {
if (debug) std::cerr << "zz node: Nop" << std::endl;
}
-void WasmBinaryBuilder::visitUnreachable(Unreachable *curr) {
+void WasmBinaryBuilder::visitUnreachable(Unreachable* curr) {
if (debug) std::cerr << "zz node: Unreachable" << std::endl;
}
-void WasmBinaryBuilder::visitDrop(Drop *curr) {
+void WasmBinaryBuilder::visitDrop(Drop* curr) {
if (debug) std::cerr << "zz node: Drop" << std::endl;
curr->value = popNonVoidExpression();
curr->finalize();