Commit message log
|
That optimization uncovered some LLVM and Binaryen bugs.
|
If we export a function that just calls another function, we can export that one
instead. Then the one in the middle may become unused:
function foo() {
  return bar();
}
export foo; // can be an export of bar
This saves a few bytes in rare cases, but probably more important is that it saves
the trampoline, so if this is on a hot path, we save a call.
Context: emscripten-core/emscripten#20478 (comment)
In general this is not needed as inlining helps us out by inlining foo() into the
caller (since foo is tiny, that always ends up happening). But exports are a case
the inliner cannot handle, so we do it here.
|
As noted in #4739, legacy code for emitting nan and infinity
exists, with the observation that it can be removed once asm.js
is no longer used and global NaN is available.
This commit removes that asm.js-specific code accordingly.
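For context, a hedged sketch of the two styles (not the exact code that was removed): asm.js-style modules could not reference the NaN and Infinity globals directly, so the output carried module-local names for them, whereas plain JS output can just use the globals.

// legacy asm.js-style pattern (sketch): module-local aliases for the special values
var nan = NaN;
var infinity = Infinity;

// current output can simply reference NaN and Infinity directly where needed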
|
The previous code was making emscripten-specific assumptions about
imports basically all coming from the `env` module.
I can't find a way to make this backwards compatible, so I may do a
combined roll with the emscripten-side change:
https://github.com/emscripten-core/emscripten/pull/17806
|
This import was being injected and then used to implement trapping.
Rather than injecting an import that doesn't exist in the original
module, we instead use the existing mechanism to implement this as
an internal helper.
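A hedged sketch of the idea (hypothetical helper name; the real output may differ): instead of importing a trap function from the embedder, the generated module defines one itself.

// internal helper emitted into the module to implement trapping (sketch)
function wasm2js_trap() {
  throw new Error('wasm trap');
}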
|
Previously we were assuming asmLibraryArg, which is what emscripten
passes as the `env` import object, but using this method is more
flexible and should allow wasm2js to work with imports that do not
all come from a single object.
The slight size increase here is just temporary until emscripten
gets updated.
See https://github.com/emscripten-core/emscripten/pull/17737
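A hedged sketch of the difference (hypothetical import names, and the asmFunc signature here is an assumption, not taken from the actual output):

// old assumption (sketch): one flat object, emscripten's asmLibraryArg
asmFunc({ abort: function () { throw new Error('abort'); } });

// more flexible (sketch): imports grouped by module name, like WebAssembly.instantiate's importObject
asmFunc({
  env: { abort: function () { throw new Error('abort'); } },
  wasi_snapshot_preview1: { fd_write: function () { return 0; } }
});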
|
Enable it in -O3 and -Os and higher.
This helps very little on output from LLVM, but it also does not alter
compile times much anyhow. On code that has not been run through
an optimizing compiler already, this can help quite a lot, e.g., 15% of
code size on some wasm GC samples.
This will not normally help with speed, as optimizing VMs do such
things anyhow. However, this can help baseline compilers and
interpreters and so forth.
|
This is because we may need to reference the segments
during the start function. For example, in the case of
pthreads we conditionally load passive segments during
start.
Tested in emscripten with: tests/runner.py wasm2js1
|
The asmFunc now sets the outer scope's `bufferView` variable
as well as its own internal views.
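A hedged sketch of the shape of that output (hypothetical details, not the exact emitted code):

var bufferView; // outer scope, used by runtime helpers that operate on memory

function asmFunc(imports) {
  var buffer = new ArrayBuffer(65536);
  var HEAP8 = new Int8Array(buffer);
  var HEAP32 = new Int32Array(buffer);
  bufferView = new Uint8Array(buffer); // also refresh the outer scope's view
  // ... module body uses the internal views ...
}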
|
Using addition in more places is better for gzip, and helps simplify the
optimizer as well.
Add a FinalOptimizer phase to do optimizations like our signed LEB tweaks, to
reduce binary size in the rare case when we do want a subtraction.
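For example (a hedged illustration with hypothetical names, not the pass's literal output), a subtraction of a constant can be written as an addition of the negated constant, so more sites share the same "+" shape:

function example(base) {
  // before: var addr = (base - 16) | 0;
  // after: the same value via addition of a negated constant
  var addr = (base + -16) | 0;
  return addr;
}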
|
We can only pack memory (drop runs of zeros from data segments) if we know the memory is zero-filled before our data is written into it.
|
This will allow for the complete removal of
`__growWasmMemory` as a followup. We currently
unconditionally generate this function
in `generateMemoryGrowthFunction`.
See #3180
|
optimizeBoolean does not receive a boolean; it is applied when the
output flows into a boolean context.
|
It is usually fine to do if (x | 0) => if (x) since it just cares if the
value is 0 or not. However, if the cast turns it into 0, then that is
incorrect, which the fuzzer found with
-2147483648 + -2147483648 | 0
(the sum is -2^32, which | 0 truncates to 0).
We can maybe look into doing this in a safe way, but for now
just remove it. It doesn't have a big impact on code size as this
is pretty rare (e.g. the minimal runtime code size test is not
broken by this).
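Concretely (a hedged illustration of the fuzzer case):

var x = -2147483648 + -2147483648; // -4294967296 in JS, a nonzero (truthy) double
if (x | 0) {
  // not reached: | 0 truncates -2^32 to 0
}
if (x) {
  // reached: dropping the | 0 changes behavior, so the rewrite is unsound here
}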
|
Adds special helper functions for data.drop etc., as unlike most
wasm instructions these are too big to emit inline.
Track passive segments at runtime in var memorySegments
whose indexes are the segment indexes.
Emit var bufferView whenever the memory exists, even without
memory segments, as we do still need the view in order to
operate on it.
Also adds a few constants for atomics that will be useful in future
PRs (as this PR updates the constant lists anyhow).
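A hedged sketch of the runtime side (memorySegments and bufferView are the names from this change; the helper names and bodies here are illustrative):

var bufferView = new Uint8Array(new ArrayBuffer(65536));
var memorySegments = {};
memorySegments[0] = new Uint8Array([104, 101, 108, 108, 111]); // passive segment 0

function wasm2js_memory_init(segment, dest, offset, size) {
  // memory.init: copy part of a passive segment into linear memory
  bufferView.set(memorySegments[segment].subarray(offset, offset + size), dest);
}

function wasm2js_data_drop(segment) {
  // data.drop: the segment's data can no longer be used
  memorySegments[segment] = new Uint8Array(0);
}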
|
* Micro-optimize base64Decode
* Update test expectations
|
Fix missing newline after // EMSCRIPTEN_START_FUNCS and // EMSCRIPTEN_END_FUNCS markers. (#2626)
* Fix missing newline after // EMSCRIPTEN_START_FUNCS and // EMSCRIPTEN_END_FUNCS markers.
* Flake
* Update tests
|
* Optimize base64 decoding (about 7x-10x faster and free of temporary garbage compared to the original version)
* new Uint8Array
* Reuse Uint8Array view
* Fix end handling
* Code format
* Update tests
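A hedged sketch of the approach (not the actual base64Decode implementation): build the character lookup table once and decode straight into a reused Uint8Array, so no temporary strings or arrays are created per call.

var BASE64_LOOKUP = new Uint8Array(128);
(function () {
  var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  for (var i = 0; i < chars.length; i++) BASE64_LOOKUP[chars.charCodeAt(i)] = i;
})();

// decode `s` into `view` starting at `offset`; returns the number of bytes written
function base64DecodeInto(s, view, offset) {
  var j = offset;
  for (var i = 0; i < s.length; i += 4) {
    var a = BASE64_LOOKUP[s.charCodeAt(i)];
    var b = BASE64_LOOKUP[s.charCodeAt(i + 1)];
    var c = BASE64_LOOKUP[s.charCodeAt(i + 2)];
    var d = BASE64_LOOKUP[s.charCodeAt(i + 3)];
    view[j++] = (a << 2) | (b >> 4);
    if (s[i + 2] !== "=") view[j++] = ((b & 15) << 4) | (c >> 2);
    if (s[i + 3] !== "=") view[j++] = ((c & 3) << 6) | d;
  }
  return j - offset;
}

When the target view is the persistent one over the module's memory, repeated decodes reuse the same backing storage instead of allocating.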
|
When memory is packed and there are passive segments, bulk memory
operations that reference those segments by index need to be updated to
reflect the new indices and possibly split into multiple instructions
that reference multiple split segments. For some bulk-memory operations,
it is necessary to introduce new globals to explicitly track the drop
state of the original segments, but this PR is careful to only add
globals where necessary.
|
We emitted the __wasm_memory_size function only when memory growth was enabled, but it can be used without that too.
In theory we could only emit it if either memory growth or memory.size is used, but I think we can expect JS minifiers to do that later.
Also fix a test suite bug - the check/auto_update script didn't run all the wasm2js tests when run with the argument wasm2js (it used that as the list of tests, instead of the list of files, which confused me here for a while...).
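For reference, a hedged sketch of what that helper amounts to in the JS output (details may differ; `buffer` is assumed to be the module's linear memory ArrayBuffer):

function __wasm_memory_size() {
  return buffer.byteLength / 65536 | 0; // current memory size in 64 KiB wasm pages
}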
|
Emscripten's minifier mis-minifies a couple bits in the memory
init function that's used with wasm2js when not using an external
memory init file:
https://github.com/emscripten-core/emscripten/issues/8886
The previous fix worked around the bug in one place but failed to
account for another. I have now confirmed that it works with this
change in place.
Updated test cases to match.
|
We don't ever emit "use asm" anymore, so this similar annotation is not really useful; it just increases size.
|
* Workaround for wasm2js output minification issue with emscripten
When using emscripten with -O2 and --memory-init-file 0, the
JS minification breaks on this function for memory initialization
setup, causing an exception to be thrown during module setup.
Moving from two 'var' declarations for the same variable to one
should avoid hitting this with no change in functionality (the
var gets hoisted anyway).
https://github.com/emscripten-core/emscripten/issues/8886
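A hedged illustration of the shape of the change (hypothetical variable name, not the actual emscripten output):

// before (sketch): the same variable declared with `var` twice in one scope
var memoryInitializer = null;
// ... other setup ...
var memoryInitializer = "memory-init-data";

// after (sketch): declare once and assign later - identical behavior, since the `var` is hoisted anyway
var memoryInitializer = null;
// ... other setup ...
memoryInitializer = "memory-init-data";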
|
Set it to a local in the asmFunc scope, so that minifiers can easily see it's a simple local value (instead of using it as an upvar from the parameters higher up, which was how the emscripten glue was emitting it).
|
This happens with, e.g., an i32 load of a constant offset, where we then have constant >> 2.
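For example (a hedged illustration):

var HEAP32 = new Int32Array(new ArrayBuffer(65536));
// before: the constant address is shifted at runtime
var before = HEAP32[1024 >> 2] | 0;
// after: the shift of the constant is folded into a plain index
var after = HEAP32[256] | 0;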
|
When loading a boolean, prefer the signed heap (which is more commonly used, and may be faster).
We never use HEAPU32 (HEAP32 is always enough), so just remove it.
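A hedged illustration: a boolean load yields 0 or 1 from either view, so the signed view is enough, and unsigned i32 values can be recovered from HEAP32 with >>> 0 at the use site, so HEAPU32 is never needed.

var buffer = new ArrayBuffer(65536);
var HEAP8 = new Int8Array(buffer);
var HEAP32 = new Int32Array(buffer);

// boolean load: the signed and unsigned byte views agree on whether the value is zero
function isFlagSet(ptr) {
  return (HEAP8[ptr | 0] | 0) != 0;
}

// unsigned 32-bit read without a HEAPU32 view: reinterpret the signed value where it is used
function loadU32(ptr) {
  return HEAP32[ptr >> 2] >>> 0;
}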
|
We don't actually try to emit traps for loads, stores, invalid float-to-int conversions, etc., so when optimizing we may as well do so under the assumption that those traps do not exist.
This lets us emit nice code for a select whose operands are loads, for example - otherwise, the values seem to have side effects.
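A hedged illustration of the select case: if loads cannot trap they are side-effect-free, so a select over two loads can become a plain conditional expression instead of forcing both values into temporaries first.

var HEAP32 = new Int32Array(new ArrayBuffer(65536));

// with trapping semantics the loads look like they have effects, so both are evaluated up front:
function selectWithTraps(cond, x, y) {
  var a = HEAP32[x >> 2] | 0;
  var b = HEAP32[y >> 2] | 0;
  return cond ? a : b;
}

// under the no-traps assumption the loads can sit directly in the ternary:
function selectNoTraps(cond, x, y) {
  return cond ? HEAP32[x >> 2] | 0 : HEAP32[y >> 2] | 0;
}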
|
We flatten for the i64 lowering etc. passes, and it is worth optimizing afterwards, to clean up stuff they created. That is run if the user ran wasm2js with an optimization level (like wasm2js -O3).
Split the test files to check both optimized and unoptimized code.
|