author    Nicolas Patry <patry.nicolas@protonmail.com>  2023-08-14 10:52:12 +0200
committer Nicolas Patry <patry.nicolas@protonmail.com>  2023-08-28 15:14:17 +0200
commit    76023236677fbab10fd1c99eab95d268416fb941 (patch)
tree      7d4ba218664bfb148b4d9ff8dcdd0155630b538b /candle-book
parent    9137c631755ad945ebdb939d24e6c5e191b8b5c2 (diff)
[Book] Add small error management + start training (with generic dataset inclusion).
Diffstat (limited to 'candle-book')
-rw-r--r--  candle-book/src/SUMMARY.md          10
-rw-r--r--  candle-book/src/training/README.md  16
2 files changed, 21 insertions, 5 deletions
diff --git a/candle-book/src/SUMMARY.md b/candle-book/src/SUMMARY.md
index 3432f66f..6eadb0c1 100644
--- a/candle-book/src/SUMMARY.md
+++ b/candle-book/src/SUMMARY.md
@@ -12,7 +12,11 @@
- [Running a model](inference/README.md)
- [Using the hub](inference/hub.md)
-- [Error management]()
+- [Error management](error_manage.md)
+- [Training](training/README.md)
+ - [MNIST]()
+ - [Fine-tuning]()
+ - [Serialization]()
- [Advanced Cuda usage]()
- [Writing a custom kernel]()
- [Porting a custom kernel]()
@@ -21,7 +25,3 @@
- [Creating a WASM app]()
- [Creating a REST api webserver]()
- [Creating a desktop Tauri app]()
-- [Training]()
- - [MNIST]()
- - [Fine-tuning]()
- - [Serialization]()
diff --git a/candle-book/src/training/README.md b/candle-book/src/training/README.md
index 8977de34..f4f9eb85 100644
--- a/candle-book/src/training/README.md
+++ b/candle-book/src/training/README.md
@@ -1 +1,17 @@
# Training
+
+
+Training starts with data. We're going to use the Hugging Face hub and
+start with the "hello world" dataset of machine learning: MNIST.
+
+Let's start by downloading `MNIST` from [huggingface](https://huggingface.co/datasets/mnist).
+
+
+```rust
+use candle_datasets::from_hub;
+
+
+let dataset = from_hub("mnist")?;
+```
+
+This uses the standardized `parquet` files from the `refs/convert/parquet` branch on every dataset.
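
As a rough sketch of what loading from that `refs/convert/parquet` branch can look like with the `hf-hub` and `parquet` crates (this is an illustration, not the book's or `candle_datasets`' actual implementation; the shard file name below is an assumption, since real shard names come from listing the dataset repo's files):

```rust
use hf_hub::{api::sync::Api, Repo, RepoType};
use parquet::file::reader::{FileReader, SerializedFileReader};
use std::fs::File;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the hub API at the dataset repo, pinned to the parquet conversion branch.
    let api = Api::new()?;
    let repo = api.repo(Repo::with_revision(
        "mnist".to_string(),
        RepoType::Dataset,
        "refs/convert/parquet".to_string(),
    ));

    // Hypothetical shard name: the actual parquet files are discovered by
    // listing the repo's files rather than hard-coding a path.
    let local_path = repo.get("mnist/train/0000.parquet")?;

    // Open the downloaded parquet file and count its rows.
    let reader = SerializedFileReader::new(File::open(local_path)?)?;
    let rows = reader.get_row_iter(None)?.count();
    println!("train shard contains {rows} rows");
    Ok(())
}
```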