Diffstat (limited to 'README.md')
 README.md | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 912e8b43..9644b15c 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
# candle
+[![discord server](https://dcbadge.vercel.app/api/server/hugging-face-879548962464493619)](https://discord.com/channels/879548962464493619/1136218819447238726)
[![Latest version](https://img.shields.io/crates/v/candle-core.svg)](https://crates.io/crates/candle-core)
[![Documentation](https://docs.rs/candle-core/badge.svg)](https://docs.rs/candle-core)
![License](https://img.shields.io/crates/l/candle-core.svg)
@@ -135,7 +136,9 @@ Finally, Rust is cool! A lot of the HF ecosystem already has Rust crates, like [
bring in the entire torch library into the runtime. The main contributor of `tch-rs` is also involved in the development
of `candle`.
-### Missing symbols when compiling with the mkl feature.
+### Common Errors
+
+#### Missing symbols when compiling with the mkl feature.
If you get some missing symbols when compiling binaries/tests using the mkl
features, e.g.:
@@ -154,7 +157,19 @@ can try adding the following at the top of your binary:
```rust
extern crate intel_mkl_src;
```
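For context, a minimal binary using this workaround might look as follows. This is a hedged sketch, not candle's own example code: the tensor calls are illustrative, and it assumes a `Cargo.toml` that depends on `intel-mkl-src` and on `candle-core` with the `mkl` feature enabled.

```rust
// Hypothetical src/main.rs: the otherwise-unused `extern crate` forces
// the MKL symbols into the final link, avoiding the missing-symbol errors.
extern crate intel_mkl_src;

use candle_core::{DType, Device, Tensor};

fn main() -> candle_core::Result<()> {
    // Any candle computation; zeros() is just a placeholder workload.
    let t = Tensor::zeros((2, 2), DType::F32, &Device::Cpu)?;
    println!("{t}");
    Ok(())
}
```

The `extern crate` line does nothing at the Rust level; its sole purpose is to make cargo treat the MKL source crate as a link-time dependency of the binary.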
-### Tracking down errors
+#### Cannot run the llama example: access to source requires login credentials
+
+```
+Error: request error: https://huggingface.co/meta-llama/Llama-2-7b-hf/resolve/main/tokenizer.json: status code 401
+```
+
+This is likely because you don't have permission to access the llama-v2 model.
+To fix this, register on the Hugging Face Hub, accept the [llama-v2 model
+conditions](https://huggingface.co/meta-llama/Llama-2-7b-hf), and set up your
+authentication token. See issue
+[#350](https://github.com/huggingface/candle/issues/350) for more details.
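The token setup step can be sketched as follows, assuming the `huggingface-cli` tool from the Python `huggingface_hub` package is available; the exact workflow may differ on your system.

```shell
# One-time installation of the CLI (assumption: pip is available).
pip install huggingface_hub

# Log in once; the token is cached locally (by default under
# ~/.cache/huggingface/token), where client libraries such as the
# hf-hub crate can find it.
huggingface-cli login
```

After logging in, re-running the example should resolve the 401 as long as the model conditions have been accepted for the same account.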
+
+#### Tracking down errors
You can set `RUST_BACKTRACE=1` to be provided with backtraces when a candle
error is generated.
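A minimal invocation sketch, where `--example llama` stands in for whichever binary or example is failing:

```shell
# Enable backtraces for this run only; the example name is illustrative.
RUST_BACKTRACE=1 cargo run --example llama

# RUST_BACKTRACE=full prints an unabridged backtrace when the short
# form does not show enough frames.
RUST_BACKTRACE=full cargo run --example llama
```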