path: root/README.md
author     Laurent Mazare <laurent.mazare@gmail.com> 2023-09-29 12:50:50 +0200
committer  GitHub <noreply@github.com> 2023-09-29 11:50:50 +0100
commit     49fa184a35f8a6d7ac566f92e2540e21cc01c3d9 (patch)
tree       51c5c03ada6c4ab9d5cd954040d0a13516a99e6e /README.md
parent     6f17ef82bed4ae0efdbbd39eed68e473513a6d5d (diff)
Mistral readme (#994)
* Mistral: print the generated text.
* Add mistral to the readmes.
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/README.md b/README.md
index 718f4652..9175f73c 100644
--- a/README.md
+++ b/README.md
@@ -62,6 +62,8 @@ We also provide some command line based examples using state of the art models
- [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM.
- [Falcon](./candle-examples/examples/falcon/): general LLM.
- [Phi-v1.5](./candle-examples/examples/phi/): a 1.3b general LLM with performance on par with LLaMA-v2 7b.
+- [Mistral7b-v0.1](./candle-examples/examples/mistral/): a 7b general LLM with
+  performance better than all publicly available 13b models as of 2023-09-28.
- [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
- [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of
the LLaMA model using the same quantization techniques as
@@ -149,6 +151,7 @@ If you have an addition to this list, please submit a pull request.
- Falcon.
- StarCoder.
- Phi v1.5.
+ - Mistral 7b v0.1.
- T5.
- Bert.
- Whisper (multi-lingual support).
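The Mistral example added by this commit can be run like candle's other model examples, via `cargo run --example`. A minimal sketch of an invocation (the prompt text is illustrative; see `candle-examples/examples/mistral/` for the full set of supported flags):

```shell
# Build and run the newly added Mistral 7b v0.1 example in release mode.
# The example downloads the model weights on first use; the prompt below
# is only a placeholder.
cargo run --example mistral --release -- \
  --prompt "Here is a short introduction to the Rust language: "
```

Running in `--release` mode matters here: debug builds of candle's example binaries are considerably slower for inference workloads.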