author | Laurent Mazare <laurent.mazare@gmail.com> | 2023-11-11 12:39:11 +0100
committer | GitHub <noreply@github.com> | 2023-11-11 12:39:11 +0100
commit | f1e678b39cf39ec0719a8122865a23b0a0d7f60c (patch)
tree | 8cb01c4eb0fb3ef80b2d3767e09ff7ea72de2862
parent | a007f8fdb4c6d642576913e52da65fa6dc741a2b (diff)
Mention the Yi-6b/Yi-34b models in the readme. (#1321)
-rw-r--r-- | README.md | 3
1 file changed, 3 insertions(+), 0 deletions(-)
@@ -69,6 +69,8 @@ We also provide a some command line based examples using state of the art models
   performance larger than all publicly available 13b models as of 2023-09-28.
 - [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
 - [Replit-code-v1.5](./candle-examples/examples/replit-code/): a 3.3b LLM specialized for code completion.
+- [Yi-6B / Yi-34B](./candle-examples/examples/yi/): two bilingual
+  (English/Chinese) general LLMs with 6b and 34b parameters.
 - [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of the LLaMA
   model using the same quantization techniques as [llama.cpp](https://github.com/ggerganov/llama.cpp).
@@ -174,6 +176,7 @@ If you have an addition to this list, please submit a pull request.
     - StableLM-3B-4E1T.
     - Replit-code-v1.5-3B.
     - Bert.
+    - Yi-6B and Yi-34B.
 - Text to text.
     - T5 and its variants: FlanT5, UL2, MADLAD400 (translation), CoEdit (Grammar correction).
     - Marian MT (Machine Translation).