author     Andrei Fajardo <92402603+nerdai@users.noreply.github.com>  2025-01-04 17:07:30 -0500
committer  GitHub <noreply@github.com>  2025-01-04 23:07:30 +0100
commit     6f8351dfda5c1e6cd7bd2d6f94580d92af19db43 (patch)
tree       ad80102f53974c8db0bafb6386c38dd4f421eb63
parent     57f41da13b10d909b85b7c335050e14fdb5b0d9b (diff)
add link to README (#2701)
-rw-r--r--  README.md | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/README.md b/README.md
index 246e2844..05b12c50 100644
--- a/README.md
+++ b/README.md
@@ -189,6 +189,7 @@ And then head over to
 - [`gpt-from-scratch-rs`](https://github.com/jeroenvlek/gpt-from-scratch-rs): A port of Andrej Karpathy's _Let's build GPT_ tutorial on YouTube showcasing the Candle API on a toy problem.
 - [`candle-einops`](https://github.com/tomsanbear/candle-einops): A pure rust implementation of the python [einops](https://github.com/arogozhnikov/einops) library.
 - [`atoma-infer`](https://github.com/atoma-network/atoma-infer): A Rust library for fast inference at scale, leveraging FlashAttention2 for efficient attention computation, PagedAttention for efficient KV-cache memory management, and multi-GPU support. It is OpenAI api compatible.
+- [`llms-from-scratch-rs`](https://github.com/nerdai/llms-from-scratch-rs): A comprehensive Rust translation of the code from Sebastian Raschka's Build an LLM from Scratch book.
 If you have an addition to this list, please submit a pull request.