 README.md                                       |  2 +-
 candle-examples/examples/quantized-t5/README.md | 25 ++++++++++++-
 candle-examples/examples/t5/README.md           | 14 ++++++-
 3 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/README.md b/README.md
index 8d796d14..87b39854 100644
--- a/README.md
+++ b/README.md
@@ -173,7 +173,7 @@ If you have an addition to this list, please submit a pull request.
- Mistral 7b v0.1.
- StableLM-3B-4E1T.
- Replit-code-v1.5-3B.
- - T5.
+ - T5 and its variants: FlanT5, MADLAD400 (translation), CoEdit (grammar correction).
- Bert.
- Whisper (multi-lingual support).
- Text to image.
diff --git a/candle-examples/examples/quantized-t5/README.md b/candle-examples/examples/quantized-t5/README.md
index 4a1ee5bf..8b8179eb 100644
--- a/candle-examples/examples/quantized-t5/README.md
+++ b/candle-examples/examples/quantized-t5/README.md
@@ -1,5 +1,7 @@
# candle-quantized-t5
+## Seq2Seq example
+
This example uses a quantized version of the t5 model.
```bash
@@ -8,6 +10,8 @@ $ cargo run --example quantized-t5 --release -- --prompt "translate to German: A
Eine schöne Kerze.
```
+## Generating quantized weight files
+
The weight file is automatically retrieved from the hub. It is also possible to
generate quantized weight files from the original safetensors file by using the
`tensor-tools` command line utility via:
@@ -16,8 +20,11 @@ generate quantized weight files from the original safetensors file by using the
$ cargo run --example tensor-tools --release -- quantize --quantization q6k PATH/TO/T5/model.safetensors /tmp/model.gguf
```
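The same tool should be able to produce files at other quantization levels; as a sketch, assuming `q4k` is also an accepted `--quantization` value (the `model-q4k.gguf` weight file used further below suggests it is), a smaller 4-bit file could be generated like this:

```shell
# Sketch, not verified: q4k is assumed to be a valid --quantization value
# alongside q6k; PATH/TO/T5/model.safetensors is a placeholder.
cargo run --example tensor-tools --release -- quantize \
  --quantization q4k PATH/TO/T5/model.safetensors /tmp/model-q4k.gguf
```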
-To use a different model, specify the `model-id`. For example, you can use
-quantized [CoEdit models](https://huggingface.co/jbochi/candle-coedit-quantized).
+## Using custom models
+
+To use a different model, specify the `model-id`.
+
+For example, for text editing, you can use quantized [CoEdit models](https://huggingface.co/jbochi/candle-coedit-quantized).
```bash
$ cargo run --example quantized-t5 --release -- \
@@ -26,6 +33,7 @@ $ cargo run --example quantized-t5 --release -- \
--temperature 0
...
Although their flight is weak, they run quickly through the tree canopy.
+```
By default, it will look for `model.gguf` and `config.json`, but you can specify
custom local or remote `weight-file` and `config-file`s:
@@ -40,3 +48,16 @@ cargo run --example quantized-t5 --release -- \
...
Note that a storm surge is what forecasters consider a hurricane's most dangerous part.
```
+
+### [MADLAD-400](https://arxiv.org/abs/2309.04662)
+
+MADLAD-400 is a series of multilingual machine translation T5 models trained on 250 billion tokens covering over 450 languages using publicly available data. These models are competitive with significantly larger models.
+
+```bash
+cargo run --example quantized-t5 --release -- \
+ --model-id "jbochi/madlad400-3b-mt" --weight-file "model-q4k.gguf" \
+ --prompt "<2de> How are you, my friend?" \
+ --temperature 0
+...
+ Wie geht es dir, mein Freund?
+```
diff --git a/candle-examples/examples/t5/README.md b/candle-examples/examples/t5/README.md
index 6a406467..d1a9186c 100644
--- a/candle-examples/examples/t5/README.md
+++ b/candle-examples/examples/t5/README.md
@@ -5,11 +5,23 @@
```bash
$ cargo run --example t5 --release -- --model-id "t5-small" --prompt "translate to German: A beautiful candle." --decode
...
-Running on CPU, to run on GPU, build this example with `--features cuda`
Eine schöne Kerze.
9 tokens generated (2.42 token/s)
```
+## Translation with [MADLAD-400](https://arxiv.org/abs/2309.04662)
+
+MADLAD-400 is a series of multilingual machine translation T5 models trained on 250 billion tokens covering over 450 languages using publicly available data. These models are competitive with significantly larger models.
+
+```bash
+cargo run --example t5 --release -- \
+ --model-id "jbochi/madlad400-3b-mt" \
+ --prompt "<2de> How are you, my friend?" \
+ --decode --temperature 0
+...
+ Wie geht es dir, mein Freund?
+```
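The `<2de>` prefix in the prompt above selects German as the target language. Assuming MADLAD-400 follows the same `<2xx>` token convention for its other supported languages (e.g. `<2fr>` for French, which is an assumption here, not something stated above), translating into a different language would only require changing that token:

```shell
# Sketch: assumes MADLAD-400 uses <2xx> target-language tokens
# and that <2fr> selects French.
cargo run --example t5 --release -- \
  --model-id "jbochi/madlad400-3b-mt" \
  --prompt "<2fr> How are you, my friend?" \
  --decode --temperature 0
```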
+
## Sentence embedding example:
```bash