author:    Laurent Mazare <laurent.mazare@gmail.com>  2024-04-20 16:11:24 +0200
committer: GitHub <noreply@github.com>  2024-04-20 16:11:24 +0200
commit:    52ae33291060bb57ea2b7913179747040eed02b9 (patch)
tree:      b9962b445b948908236cfe22eaeb4ede5358396d
parent:    8b390ddd290cfdcff8ef319d266b9d466838494f (diff)
download:  candle-52ae33291060bb57ea2b7913179747040eed02b9.tar.gz
           candle-52ae33291060bb57ea2b7913179747040eed02b9.tar.bz2
           candle-52ae33291060bb57ea2b7913179747040eed02b9.zip
Use llama v3 by default + add to readme. (#2094)
-rw-r--r-- | README.md                              | 4 |
-rw-r--r-- | candle-examples/examples/llama/main.rs | 2 |
2 files changed, 3 insertions, 3 deletions
diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -60,7 +60,7 @@ These online demos run entirely in your browser:
 
 We also provide a some command line based examples using state of the art models:
 
-- [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM, includes
+- [LLaMA v1, v2, and v3](./candle-examples/examples/llama/): general LLM, includes
   the SOLAR-10.7B variant.
 - [Falcon](./candle-examples/examples/falcon/): general LLM.
 - [Gemma](./candle-examples/examples/gemma/): 2b and 7b general LLMs from Google Deepmind.
@@ -200,7 +200,7 @@ If you have an addition to this list, please submit a pull request.
 - WASM support, run your models in a browser.
 - Included models.
     - Language Models.
-        - LLaMA v1 and v2 with variants such as SOLAR-10.7B.
+        - LLaMA v1, v2, and v3 with variants such as SOLAR-10.7B.
         - Falcon.
         - StarCoder, StarCoder2.
         - Phi 1, 1.5, and 2.
diff --git a/candle-examples/examples/llama/main.rs b/candle-examples/examples/llama/main.rs
index 32763153..fa30686d 100644
--- a/candle-examples/examples/llama/main.rs
+++ b/candle-examples/examples/llama/main.rs
@@ -85,7 +85,7 @@ struct Args {
     revision: Option<String>,
 
     /// The model size to use.
-    #[arg(long, default_value = "v2")]
+    #[arg(long, default_value = "v3")]
     which: Which,
 
     #[arg(long)]
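The only code change is the clap default for the `--which` flag of the llama example. The sketch below is a minimal, self-contained illustration of how such a default behaves, assuming clap 4 with the `derive` feature; the `Which` variants shown are illustrative stand-ins, not the actual enum from candle-examples/examples/llama/main.rs.

```rust
// Minimal sketch, assuming clap 4 with the "derive" feature enabled.
// The `Which` variants are illustrative; the real example has more of them.
use clap::{Parser, ValueEnum};

#[derive(Clone, Copy, Debug, ValueEnum)]
enum Which {
    V1,
    V2,
    V3,
}

#[derive(Parser, Debug)]
struct Args {
    /// The model version to use; clap matches the string "v3" against the
    /// kebab-case names derived by `ValueEnum` and yields `Which::V3`.
    #[arg(long, default_value = "v3")]
    which: Which,
}

fn main() {
    // Running with no arguments now selects V3; `--which v2` restores the old default.
    let args = Args::parse();
    println!("selected model: {:?}", args.which);
}
```

With this change, invoking the example without `--which` downloads and runs the Llama v3 weights, while older versions remain reachable by passing the flag explicitly.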