author    Ben Smith <binjimin@gmail.com>    2017-06-19 18:25:18 -0700
committer GitHub <noreply@github.com>       2017-06-19 18:25:18 -0700
commit    b7bc12696c813edde29f6362a321878c91ec6f11 (patch)
tree      de81b2fe0b7a4b0776a2cf481e789663846fa91a /README.md
parent    6e27ec869dd74b2176e4eae2b2479742c8483664 (diff)
Move testing info into test/README.md (#513)
Also update some of the information there.
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  162
1 file changed, 1 insertion(+), 161 deletions(-)
diff --git a/README.md b/README.md
index 4f3b7108..8970937b 100644
--- a/README.md
+++ b/README.md
@@ -220,167 +220,7 @@ $ out/run-interp.py -h
## Running the test suite
-To run all the tests with default configuration:
-
-```console
-$ make test
-```
-
-Every make target has a matching `test-*` target.
-
-```console
-$ make gcc-debug-asan
-$ make test-gcc-debug-asan
-$ make clang-release
-$ make test-clang-release
-...
-```
-
-You can also run the Python test runner script directly:
-
-```console
-$ test/run-tests.py
-```
-
-To run a subset of the tests, use a glob-like syntax:
-
-```console
-$ test/run-tests.py const -v
-+ dump/const.txt (0.002s)
-+ parse/assert/bad-assertreturn-non-const.txt (0.003s)
-+ parse/expr/bad-const-i32-overflow.txt (0.002s)
-+ parse/expr/bad-const-f32-trailing.txt (0.004s)
-+ parse/expr/bad-const-i32-garbage.txt (0.005s)
-+ parse/expr/bad-const-i32-trailing.txt (0.003s)
-+ parse/expr/bad-const-i32-underflow.txt (0.003s)
-+ parse/expr/bad-const-i64-overflow.txt (0.002s)
-+ parse/expr/bad-const-i32-just-negative-sign.txt (0.004s)
-+ parse/expr/const.txt (0.002s)
-[+10|-0|%100] (0.11s)
-
-$ test/run-tests.py expr*const*i32
-+ parse/expr/bad-const-i32-just-negative-sign.txt (0.002s)
-+ parse/expr/bad-const-i32-overflow.txt (0.003s)
-+ parse/expr/bad-const-i32-underflow.txt (0.002s)
-+ parse/expr/bad-const-i32-garbage.txt (0.004s)
-+ parse/expr/bad-const-i32-trailing.txt (0.002s)
-[+5|-0|%100] (0.11s)
-```
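
The pattern matching shown above can be approximated in a few lines of Python. This is an illustrative sketch only (the helper `match_tests` is invented here and is not part of run-tests.py): a bare word behaves like a substring match, and `*` wildcards work as in shell globs.

```python
from fnmatch import fnmatch

# Hypothetical sketch of glob-like test filtering, not the actual
# run-tests.py implementation.
def match_tests(pattern, test_names):
    # A bare pattern matches anywhere in the test path, so "const"
    # behaves like "*const*".
    return [t for t in test_names if fnmatch(t, "*" + pattern + "*")]
```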
-
-When a test fails, the runner shows the expected vs. actual stdout/stderr as a
-diff:
-
-```console
-$ <whoops, turned addition into subtraction in the interpreter>
-$ test/run-tests.py interp/binary
-- interp/binary.txt
- STDOUT MISMATCH:
- --- expected
- +++ actual
- @@ -1,4 +1,4 @@
- -i32_add() => i32:3
- +i32_add() => i32:4294967295
- i32_sub() => i32:16
- i32_mul() => i32:21
- i32_div_s() => i32:4294967294
-
-**** FAILED ******************************************************************
-- interp/binary.txt
-[+0|-1|%100] (0.13s)
-```
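
An expected-vs-actual mismatch like the one above can be rendered with the standard library's `difflib`. This is a hypothetical sketch of the idea, not the runner's actual code:

```python
import difflib

# Hypothetical sketch: render an expected-vs-actual stdout mismatch as a
# unified diff, similar in spirit to the runner's output above.
def stdout_diff(expected, actual):
    return "".join(difflib.unified_diff(
        expected.splitlines(keepends=True),
        actual.splitlines(keepends=True),
        fromfile="expected", tofile="actual"))
```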
-
-## Writing New Tests
-
-Tests must be placed in the test/ directory, and must have the extension
-`.txt`. The directory structure is mostly for convenience, so for example you
-can type `test/run-tests.py interp` to run all the interpreter tests.
-There's otherwise no logic attached to a test being in a given directory.
-
-That said, try to make test names self-explanatory, and try to test only one
-thing per test. Also make sure that tests expected to fail start with `bad-`.
-
-The test format is straightforward:
-
-```wast
-;;; KEY1: VALUE1A VALUE1B...
-;;; KEY2: VALUE2A VALUE2B...
-(input (to)
- (the executable))
-(;; STDOUT ;;;
-expected stdout
-;;; STDOUT ;;)
-(;; STDERR ;;;
-expected stderr
-;;; STDERR ;;)
-```
-
-The test runner will copy the input to a temporary file and pass it as an
-argument to the executable (which by default is `out/wast2wasm`).
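
The copy-and-invoke step can be sketched as follows. This is a simplified, hypothetical illustration (`run_one` and its defaults are invented for this example), not the real harness:

```python
import os
import shutil
import subprocess
import tempfile

# Hypothetical sketch of how a harness might run one test: copy the
# input to a temporary file, then pass that file as the last argument.
def run_one(test_file, exe="out/wast2wasm", flags=()):
    fd, tmp_path = tempfile.mkstemp(suffix=".txt")
    os.close(fd)
    try:
        shutil.copyfile(test_file, tmp_path)
        proc = subprocess.run([exe, *flags, tmp_path],
                              capture_output=True, text=True)
        return proc.returncode, proc.stdout, proc.stderr
    finally:
        os.remove(tmp_path)
```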
-
-The currently supported list of keys:
-
-- `TOOL`: a set of preconfigured keys; see below.
-- `EXE`: the executable to run; defaults to `out/wast2wasm`.
-- `STDIN_FILE`: the file to use for STDIN instead of the contents of this file.
-- `FLAGS`: additional flags to pass to the executable.
-- `ERROR`: the expected return value from the executable; defaults to 0.
-- `SLOW`: if defined, this test's timeout is doubled.
-- `SKIP`: if defined, this test is not run. You can use the value as a comment.
-- `TODO`, `NOTE`: a useful place to put additional information about the test.
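
Parsing the `;;; KEY: VALUE` directives above can be sketched in a few lines. This is a hypothetical simplification (the real run-tests.py parser is more involved):

```python
import re

# Hypothetical sketch: collect ';;; KEY: VALUE' directive lines from the
# top of a test file, stopping at the first non-directive line.
def parse_directives(text):
    keys = {}
    for line in text.splitlines():
        m = re.match(r";;;\s*([A-Z_]+)\s*:\s*(.*)", line)
        if not m:
            break  # directives only appear before the test input
        keys[m.group(1)] = m.group(2).strip()
    return keys
```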
-
-The currently supported list of tools:
-
-- `wast2wasm`: runs `wast2wasm`
-- `run-roundtrip`: runs the `run-roundtrip.py` script. This does a roundtrip
- conversion using `wast2wasm` and `wasm2wast`, making sure the .wasm results
- are identical.
-- `run-interp`: runs the `run-interp.py` script, running all exported functions.
-- `run-interp-spec`: runs the `run-interp.py` script with the `--spec` flag.
-
-When you first write a test, it's easiest to omit the expected stdout and
-stderr and let the test harness fill them in automatically. First, let's
-write our test:
-
-```sh
-$ cat > test/my-awesome-test.txt << HERE
-;;; TOOL: run-interp-spec
-(module
- (export "add2" 0)
- (func (param i32) (result i32)
- (i32.add (get_local 0) (i32.const 2))))
-(assert_return (invoke "add2" (i32.const 4)) (i32.const 6))
-(assert_return (invoke "add2" (i32.const -2)) (i32.const 0))
-HERE
-```
-
-If we run it, it will fail:
-
-```console
-- my-awesome-test.txt
- STDOUT MISMATCH:
- --- expected
- +++ actual
- @@ -0,0 +1 @@
- +2/2 tests passed.
-
-**** FAILED ******************************************************************
-- my-awesome-test.txt
-[+0|-1|%100] (0.03s)
-```
-
-We can rebase it automatically with the `-r` flag. Running the test again shows
-that the expected stdout has been added:
-
-```console
-$ test/run-tests.py my-awesome-test -r
-[+1|-0|%100] (0.03s)
-$ test/run-tests.py my-awesome-test
-[+1|-0|%100] (0.03s)
-$ tail -n 3 test/my-awesome-test.txt
-(;; STDOUT ;;;
-2/2 tests passed.
-;;; STDOUT ;;)
-```
+See [test/README.md](test/README.md).
## Sanitizers