# Installation

**With CUDA support**:

1. First, make sure that CUDA is correctly installed.
- `nvcc --version` should print information about your CUDA compiler driver.
- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPU's compute capability, e.g. something like:

```bash
compute_cap
8.9
```

If either of the above commands fails, make sure your CUDA installation is up to date.

2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with CUDA support

Start by creating a new cargo project:

```bash
cargo new myapp
cd myapp
```

Make sure to add the `candle-core` crate with the `cuda` feature:

```bash
cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"
```

Run `cargo build` to make sure everything can be correctly built.

```bash
cargo build
```
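
To check that the CUDA backend actually works end to end, you can replace the generated `src/main.rs` with a minimal sketch like the following (assuming a single GPU at index 0). It allocates two random tensors on the GPU and multiplies them:

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Select the first CUDA device; this fails if the `cuda` feature
    // is missing or no compatible GPU/driver is available.
    let device = Device::new_cuda(0)?;

    // Two random matrices on the GPU and their product.
    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;
    let c = a.matmul(&b)?;

    println!("{c}");
    Ok(())
}
```

If `cargo run` prints a 2x4 tensor, the CUDA backend is working.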

**Without CUDA support**:

Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows:

```bash
cargo new myapp
cd myapp
cargo add --git https://github.com/huggingface/candle.git candle-core
```

Finally, run `cargo build` to make sure everything can be correctly built.

```bash
cargo build
```
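
As a quick smoke test, you can again replace the generated `src/main.rs` with a small sketch; this CPU-only variant builds a tensor from explicit data and doubles it:

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // All computation runs on the CPU backend here.
    let device = Device::Cpu;

    // Build a 2x2 tensor from explicit data and double it.
    let a = Tensor::new(&[[1f32, 2.], [3., 4.]], &device)?;
    let b = (&a + &a)?;

    println!("{b}");
    Ok(())
}
```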

**With mkl support**:

You can also enable the `mkl` feature, which can speed up inference on CPU. See [Using mkl](./advanced/mkl.md) for details.
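
For reference, adding the crate with this feature enabled follows the same pattern as above (assuming the `mkl` feature flag exposed by `candle-core`; the linked chapter covers the details):

```bash
cargo add --git https://github.com/huggingface/candle.git candle-core --features "mkl"
```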