author | Laurent Mazare <laurent.mazare@gmail.com> | 2023-10-06 21:26:04 +0100 |
---|---|---|
committer | GitHub <noreply@github.com> | 2023-10-06 21:26:04 +0100 |
commit | 955e00b2e895c6e53f9348b1dd672ee4ebcf8f53 (patch) | |
tree | 72645b09fb60c7b8629c7d92ded57225e5f19a79 /README.md | |
parent | d5f7267087bc253a2fe93c95ae78a164053646c1 (diff) | |
Add to the readmes for stable-lm. (#1047)
Diffstat (limited to 'README.md')
-rw-r--r-- | README.md | 3 |
1 file changed, 3 insertions, 0 deletions
```diff
@@ -62,6 +62,8 @@ We also provide a some command line based examples using state of the art models
 - [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM.
 - [Falcon](./candle-examples/examples/falcon/): general LLM.
 - [Phi-v1.5](./candle-examples/examples/phi/): a 1.3b general LLM with performance on par with LLaMA-v2 7b.
+- [StableLM-3B-4E1T](./candle-examples/examples/stable-lm/): a 3b general LLM
+  pre-trained on 1T tokens of English and code datasets.
 - [Mistral7b-v0.1](./candle-examples/examples/mistral/): a 7b general LLM with
   performance larger than all publicly available 13b models as of 2023-09-28.
 - [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
@@ -152,6 +154,7 @@ If you have an addition to this list, please submit a pull request.
   - StarCoder.
   - Phi v1.5.
   - Mistral 7b v0.1.
+  - StableLM-3B-4E1T.
   - T5.
   - Bert.
   - Whisper (multi-lingual support).
```