author    Laurent Mazare <laurent.mazare@gmail.com>  2023-10-18 11:27:23 +0100
committer GitHub <noreply@github.com>                2023-10-18 11:27:23 +0100
commit    63c204c79e03b32351177817f78a3503a30a3624 (patch)
tree      2d2c65da874006cfbe397aea7dd270bb8530c170 /README.md
parent    767a6578f1b8c11cc84be35a367724b368ae7ebb (diff)
download  candle-63c204c79e03b32351177817f78a3503a30a3624.tar.gz
          candle-63c204c79e03b32351177817f78a3503a30a3624.tar.bz2
          candle-63c204c79e03b32351177817f78a3503a30a3624.zip
Add a mention to the replit-code model in the readme. (#1121)
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 2 ++
1 file changed, 2 insertions, 0 deletions
@@ -67,6 +67,7 @@ We also provide a some command line based examples using state of the art models:
 - [Mistral7b-v0.1](./candle-examples/examples/mistral/): a 7b general LLM with performance larger than all publicly available 13b models as of 2023-09-28.
 - [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
+- [Replit-code-v1.5](./candle-examples/examples/replit-code/): a 3.3b LLM specialized for code completion.
 - [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of the LLaMA model using the same quantization techniques as [llama.cpp](https://github.com/ggerganov/llama.cpp).
@@ -155,6 +156,7 @@ If you have an addition to this list, please submit a pull request.
   - Phi v1.5.
   - Mistral 7b v0.1.
   - StableLM-3B-4E1T.
+  - Replit-code-v1.5-3B.
   - T5.
   - Bert.
   - Whisper (multi-lingual support).