commit    7be982f6f77fa26fab09792130e1fd707bc728be
author    Laurent Mazare <laurent.mazare@gmail.com>  2023-12-14 08:02:27 -0600
committer GitHub <noreply@github.com>                2023-12-14 08:02:27 -0600
tree      e6cf6f3c4fb4a4591dd176f23c4b5bd69a632fed  /README.md
parent    104e196d468d6c440c9f1fc504be37b2cbfb9722
Mention phi-2 in the readme. (#1434)
Diffstat (limited to 'README.md')
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/README.md b/README.md
index f0c96a46..71cd6fca 100644
--- a/README.md
+++ b/README.md
@@ -54,7 +54,7 @@ These online demos run entirely in your browser:
- [whisper](https://huggingface.co/spaces/lmz/candle-whisper): speech recognition.
- [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2): text generation.
- [T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm): text generation.
-- [Phi-v1.5](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm): text generation.
+- [Phi-1.5, and Phi-2](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm): text generation.
- [Segment Anything Model](https://huggingface.co/spaces/radames/candle-segment-anything-wasm): Image segmentation.
- [BLIP](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning): image captioning.
@@ -62,7 +62,7 @@ We also provide a some command line based examples using state of the art models
- [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM.
- [Falcon](./candle-examples/examples/falcon/): general LLM.
-- [Phi-v1 and Phi-v1.5](./candle-examples/examples/phi/): a 1.3b general LLM with performance on par with LLaMA-v2 7b.
+- [Phi-1, Phi-1.5, and Phi-2](./candle-examples/examples/phi/): 1.3b and 2.7b general LLMs with performance on par with LLaMA-v2 7b.
- [StableLM-3B-4E1T](./candle-examples/examples/stable-lm/): a 3b general LLM
pre-trained on 1T tokens of English and code datasets.
- [Mistral7b-v0.1](./candle-examples/examples/mistral/): a 7b general LLM with
@@ -122,7 +122,7 @@ There are also some wasm examples for whisper and
[whisper](https://huggingface.co/spaces/lmz/candle-whisper),
[llama2](https://huggingface.co/spaces/lmz/candle-llama2),
[T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm),
-[Phi-v1.5](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm),
+[Phi-1.5, and Phi-2](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm),
[Segment Anything Model](https://huggingface.co/spaces/radames/candle-segment-anything-wasm).
For LLaMA2, run the following command to retrieve the weight files and start a
@@ -171,7 +171,7 @@ If you have an addition to this list, please submit a pull request.
- LLaMA v1 and v2.
- Falcon.
- StarCoder.
- - Phi v1.5.
+ - Phi 1, 1.5, and 2.
- Mistral 7b v0.1.
- StableLM-3B-4E1T.
- Replit-code-v1.5-3B.
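The diff above can be replayed in isolation with the standard `patch` tool. A minimal sketch, using a throwaway directory and a one-line README (both hypothetical; the real commit touches four hunks of candle's README.md):

```shell
# Reproduce the first hunk of this commit as a standalone patch.
set -e
cd "$(mktemp -d)"

# A one-line stand-in for the pre-commit README (hypothetical file).
cat > README.md <<'EOF'
- [Phi-v1.5](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm): text generation.
EOF

# The hunk from the commit, rewritten as a self-contained unified diff.
cat > phi.patch <<'EOF'
--- a/README.md
+++ b/README.md
@@ -1 +1 @@
-- [Phi-v1.5](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm): text generation.
+- [Phi-1.5, and Phi-2](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm): text generation.
EOF

patch -p1 < phi.patch   # -p1 strips the a/ and b/ prefixes, edits README.md in place
grep "Phi-2" README.md  # confirm the updated demo line is present
```

In a real clone the same change is visible with `git show 7be982f6f77fa26fab09792130e1fd707bc728be`.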