-rw-r--r--  README.md                                33
-rw-r--r--  candle-book/src/guide/installation.md    44
2 files changed, 67 insertions, 10 deletions
diff --git a/README.md b/README.md
index 7a1154a4..b510979a 100644
--- a/README.md
+++ b/README.md
@@ -10,14 +10,39 @@ and ease of use. Try our online demos:
[LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2),
[yolo](https://huggingface.co/spaces/lmz/candle-yolo).
+## Get started
+
+Make sure that you have [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) correctly installed as described in [**Installation**](https://huggingface.github.io/candle/guide/installation.html).
+
+Let's see how to run a simple matrix multiplication.
+Write the following to your `myapp/src/main.rs` file:
```rust
-let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?;
-let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?;
+use candle_core::{Device, Tensor};
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+ let device = Device::Cpu;
+
+ let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
+ let b = Tensor::randn(0f32, 1., (3, 4), &device)?;
+
+ let c = a.matmul(&b)?;
+ println!("{c}");
+ Ok(())
+}
+```
+
+`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`.
+
-let c = a.matmul(&b)?;
-println!("{c}");
+If you have installed `candle` with Cuda support, simply define the `device` to be on the GPU:
+
+```diff
+- let device = Device::Cpu;
++ let device = Device::new_cuda(0)?;
```
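
Put together, a minimal sketch of the same program running on the GPU (assuming `candle-core` was added with the `cuda` feature as described in the installation guide) would look like:

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Use the first CUDA device; this requires candle-core built with the "cuda" feature.
    let device = Device::new_cuda(0)?;

    // Same computation as above, but the tensors now live on the GPU.
    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;

    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```
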
+For more advanced examples, please have a look at the following section.
+
## Check out our examples
Check out our [examples](./candle-examples/examples/):
diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md
index d2086e0c..394cef35 100644
--- a/candle-book/src/guide/installation.md
+++ b/candle-book/src/guide/installation.md
@@ -1,24 +1,56 @@
# Installation
-Start by creating a new app:
+**With Cuda support**:
+
+1. First, make sure that Cuda is correctly installed.
+- `nvcc --version` should print information about your Cuda compiler driver.
+- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPU's compute capability, e.g. something like:
+
+```bash
+compute_cap
+8.9
+```
+
+If either of the above commands errors out, make sure to update your Cuda installation.
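
For quick reference, both checks can be run from a shell:

```bash
# Should print the version of the Cuda compiler driver.
nvcc --version

# Should print the compute capability of each GPU, e.g. 8.9.
nvidia-smi --query-gpu=compute_cap --format=csv
```
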
+
+2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support.
+
+Start by creating a new cargo project:
```bash
cargo new myapp
cd myapp
-cargo add --git https://github.com/huggingface/candle.git candle-core
```
-At this point, candle will be built **without** CUDA support.
-To get CUDA support use the `cuda` feature
+Make sure to add the `candle-core` crate with the `cuda` feature:
+
+```bash
+cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"
+```
+
+Run `cargo build` to make sure everything can be correctly built.
+
```bash
-cargo add --git https://github.com/huggingface/candle.git candle-core --features cuda
+cargo build
+```
+
+**Without Cuda support**:
+
+Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows:
+
+```bash
+cargo new myapp
+cd myapp
+cargo add --git https://github.com/huggingface/candle.git candle-core
```
-You can check everything works properly:
+Finally, run `cargo build` to make sure everything can be correctly built.
```bash
cargo build
```
+**With mkl support**:
You can also enable the `mkl` feature, which can provide faster inference on CPU. See [Using mkl](./advanced/mkl.md).
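
Assuming the feature is enabled the same way as `cuda` above (the exact flag may differ), adding the crate with mkl would look something like:

```bash
cargo add --git https://github.com/huggingface/candle.git candle-core --features "mkl"
```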