author     Alon Zakai <azakai@google.com>            2023-08-07 09:36:12 -0700
committer  GitHub <noreply@github.com>               2023-08-07 09:36:12 -0700
commit     88762a2a6204cceb153b13238e17f9076259f4bd (patch)
tree       5b65f4ebeff3cdae2271ad2d3f5611805bb3d9b0 /test/lit/passes/optimize-casts.wast
parent     bcdfa72afe41746ac13d27e6d02866c7d11e7fb5 (diff)
Fix LinearExecutionWalker on calls (#5857)
Calls were simply not handled there, so we could think we were still in the same
basic block when we were not, affecting various passes (but somehow this went
unnoticed until the TNHOracle #5850 ran on some particular Java code).
One existing test was affected, and two new tests are added: one for TNHOracle
where I detected this, and one in OptimizeCasts which is perhaps a simpler way
to see the problem.
All the cases but the TNH one, however, do not need this fix for correctness,
since they do not actually care whether a call might throw. As a TODO, we should
find a way to undo this minor regression. The regression only affects builds
with EH enabled, though, so most users should be unaffected even in the interim.
Diffstat (limited to 'test/lit/passes/optimize-casts.wast')
-rw-r--r--  test/lit/passes/optimize-casts.wast  70
1 files changed, 70 insertions, 0 deletions
diff --git a/test/lit/passes/optimize-casts.wast b/test/lit/passes/optimize-casts.wast
index 4dd839d97..83cde8a4b 100644
--- a/test/lit/passes/optimize-casts.wast
+++ b/test/lit/passes/optimize-casts.wast
@@ -8,9 +8,13 @@
   ;; CHECK: (type $B (sub $A (struct )))
   (type $B (struct_subtype $A))
 
+  ;; CHECK: (type $void (func))
+
   ;; CHECK: (type $D (array (mut i32)))
   (type $D (array (mut i32)))
 
+  (type $void (func))
+
   ;; CHECK: (global $a (mut i32) (i32.const 0))
   (global $a (mut i32) (i32.const 0))
 
@@ -173,6 +177,65 @@
     )
   )
 
+  ;; CHECK: (func $not-past-call (type $ref|struct|_=>_none) (param $x (ref struct))
+  ;; CHECK-NEXT:  (drop
+  ;; CHECK-NEXT:   (ref.cast $A
+  ;; CHECK-NEXT:    (local.get $x)
+  ;; CHECK-NEXT:   )
+  ;; CHECK-NEXT:  )
+  ;; CHECK-NEXT:  (drop
+  ;; CHECK-NEXT:   (call $get)
+  ;; CHECK-NEXT:  )
+  ;; CHECK-NEXT:  (drop
+  ;; CHECK-NEXT:   (local.get $x)
+  ;; CHECK-NEXT:  )
+  ;; CHECK-NEXT: )
+  (func $not-past-call (param $x (ref struct))
+    (drop
+      (ref.cast $A
+        (local.get $x)
+      )
+    )
+    ;; The call in the middle stops us from helping the last get, since a call
+    ;; might branch out. TODO we could still optimize in this case, with more
+    ;; precision (since if we branch out it doesn't matter what we have below).
+    (drop
+      (call $get)
+    )
+    (drop
+      (local.get $x)
+    )
+  )
+
+  ;; CHECK: (func $not-past-call_ref (type $ref|struct|_=>_none) (param $x (ref struct))
+  ;; CHECK-NEXT:  (drop
+  ;; CHECK-NEXT:   (ref.cast $A
+  ;; CHECK-NEXT:    (local.get $x)
+  ;; CHECK-NEXT:   )
+  ;; CHECK-NEXT:  )
+  ;; CHECK-NEXT:  (call_ref $void
+  ;; CHECK-NEXT:   (ref.func $void)
+  ;; CHECK-NEXT:  )
+  ;; CHECK-NEXT:  (drop
+  ;; CHECK-NEXT:   (local.get $x)
+  ;; CHECK-NEXT:  )
+  ;; CHECK-NEXT: )
+  (func $not-past-call_ref (param $x (ref struct))
+    (drop
+      (ref.cast $A
+        (local.get $x)
+      )
+    )
+    ;; As in the last function, the call in the middle stops us from helping the
+    ;; last get (this time with a call_ref).
+    (call_ref $void
+      (ref.func $void)
+    )
+    (drop
+      (local.get $x)
+    )
+  )
+
   ;; CHECK: (func $best (type $ref|struct|_=>_none) (param $x (ref struct))
   ;; CHECK-NEXT: (local $1 (ref $A))
   ;; CHECK-NEXT: (local $2 (ref $B))
@@ -1229,4 +1292,11 @@
     ;; Helper for the above.
     (unreachable)
   )
+
+  ;; CHECK: (func $void (type $void)
+  ;; CHECK-NEXT:  (nop)
+  ;; CHECK-NEXT: )
+  (func $void
+    ;; Helper for the above.
+  )
 )