| author | Andrei Fajardo <92402603+nerdai@users.noreply.github.com> | 2025-01-04 17:07:30 -0500 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-01-04 23:07:30 +0100 |
| commit | 6f8351dfda5c1e6cd7bd2d6f94580d92af19db43 | |
| tree | ad80102f53974c8db0bafb6386c38dd4f421eb63 | |
| parent | 57f41da13b10d909b85b7c335050e14fdb5b0d9b | |
add link to README (#2701)
-rw-r--r-- | README.md | 1 |
1 file changed, 1 insertion, 0 deletions
@@ -189,6 +189,7 @@ And then head over to
 - [`gpt-from-scratch-rs`](https://github.com/jeroenvlek/gpt-from-scratch-rs): A port of Andrej Karpathy's _Let's build GPT_ tutorial on YouTube showcasing the Candle API on a toy problem.
 - [`candle-einops`](https://github.com/tomsanbear/candle-einops): A pure rust implementation of the python [einops](https://github.com/arogozhnikov/einops) library.
 - [`atoma-infer`](https://github.com/atoma-network/atoma-infer): A Rust library for fast inference at scale, leveraging FlashAttention2 for efficient attention computation, PagedAttention for efficient KV-cache memory management, and multi-GPU support. It is OpenAI api compatible.
+- [`llms-from-scratch-rs`](https://github.com/nerdai/llms-from-scratch-rs): A comprehensive Rust translation of the code from Sebastian Raschka's Build an LLM from Scratch book.
 
 If you have an addition to this list, please submit a pull request.