author    Nicolas Patry <patry.nicolas@protonmail.com>    2023-08-24 16:25:52 +0200
committer GitHub <noreply@github.com>                     2023-08-24 16:25:52 +0200
commit    a87c6f7652069694a31e90d3fa1a04adb34ebb4c (patch)
tree      9cad9f3869243326fc371b3262e044c8e59309b2
parent    afd965f77c4b7c2b6cca837c1fd84c82f03903c2 (diff)
parent    1f58bdbb1d2128ab2bef37621e218272de7ba4fe (diff)
Merge pull request #561 from patrickvonplaten/add_installation
Improve installation section and "get started"
 README.md                             | 33
 candle-book/src/guide/installation.md | 44
 2 files changed, 67 insertions(+), 10 deletions(-)
@@ -10,14 +10,39 @@ and ease of use. Try our online demos:
 [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2),
 [yolo](https://huggingface.co/spaces/lmz/candle-yolo).
 
+## Get started
+
+Make sure that you have [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) correctly installed as described in [**Installation**](https://huggingface.github.io/candle/guide/installation.html).
+
+Let's see how to run a simple matrix multiplication.
+Write the following to your `myapp/src/main.rs` file:
+
 ```rust
-let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?;
-let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?;
+use candle_core::{Device, Tensor};
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let device = Device::Cpu;
+
+    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
+    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;
+
+    let c = a.matmul(&b)?;
+    println!("{c}");
+    Ok(())
+}
+```
+
+`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`.
+
 
-let c = a.matmul(&b)?;
-println!("{c}");
+Having installed `candle` with Cuda support, simply define the `device` to be on the GPU:
+
+```diff
+- let device = Device::Cpu;
++ let device = Device::new_cuda(0)?;
 ```
+For more advanced examples, please have a look at the following section.
+
 
 ## Check out our examples
 
 Check out our [examples](./candle-examples/examples/):
diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md
index d2086e0c..394cef35 100644
--- a/candle-book/src/guide/installation.md
+++ b/candle-book/src/guide/installation.md
@@ -1,24 +1,56 @@
 # Installation
 
-Start by creating a new app:
+**With Cuda support**:
+
+1. First, make sure that Cuda is correctly installed.
+- `nvcc --version` should print information about your Cuda compiler driver.
+- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPU's compute capability, e.g. something
+like:
+
+```bash
+compute_cap
+8.9
+```
+
+If any of the above commands errors out, please make sure to update your Cuda version.
+
+2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support.
+
+Start by creating a new cargo project:
 
 ```bash
 cargo new myapp
 cd myapp
-cargo add --git https://github.com/huggingface/candle.git candle-core
 ```
 
-At this point, candle will be built **without** CUDA support.
-To get CUDA support use the `cuda` feature
+Make sure to add the `candle-core` crate with the `cuda` feature:
+
+```bash
+cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"
+```
+
+Run `cargo build` to make sure everything can be correctly built.
+
 ```bash
-cargo add --git https://github.com/huggingface/candle.git candle-core --features cuda
+cargo build
+```
+
+**Without Cuda support**:
+
+Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows:
+
+```bash
+cargo new myapp
+cd myapp
+cargo add --git https://github.com/huggingface/candle.git candle-core
 ```
 
-You can check everything works properly:
+Finally, run `cargo build` to make sure everything can be correctly built.
 
 ```bash
 cargo build
 ```
+
+**With mkl support**: You can also enable the `mkl` feature, which can give faster inference on CPU. [Using mkl](./advanced/mkl.md)