author    zachcp <zachcp@users.noreply.github.com>    2024-11-15 02:30:15 -0500
committer GitHub <noreply@github.com>                 2024-11-15 08:30:15 +0100
commit    f689ce5d39c6f1475dfc71503288ea2905c8f685 (patch)
tree      10b35ae68f1f5683edfebdcf92970de78ba05283 /candle-transformers/src/models/quantized_llama.rs
parent    0ed24b9852ccc7dfb92d555afba3d56c2a3f3224 (diff)
Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of mod docs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
Diffstat (limited to 'candle-transformers/src/models/quantized_llama.rs')
-rw-r--r--  candle-transformers/src/models/quantized_llama.rs | 17
1 file changed, 17 insertions(+), 0 deletions(-)
diff --git a/candle-transformers/src/models/quantized_llama.rs b/candle-transformers/src/models/quantized_llama.rs
index 04a50981..7efd385d 100644
--- a/candle-transformers/src/models/quantized_llama.rs
+++ b/candle-transformers/src/models/quantized_llama.rs
@@ -1,3 +1,20 @@
+//! Quantized LLaMA model implementation.
+//!
+//! This provides a quantized implementation of the LLaMA language model
+//! architecture, using quantized weights to reduce memory usage while
+//! maintaining model quality.
+//!
+//! Key characteristics:
+//! - Transformer decoder architecture
+//! - Support for 2/3/4/8-bit quantization
+//! - Optimized memory usage through quantization
+//! - Configurable model sizes and parameter counts
+//!
+//! References:
+//! - [LLaMA Paper](https://arxiv.org/abs/2302.13971)
+//! - [LLaMA Model](https://github.com/facebookresearch/llama)
+//!
+
use std::collections::HashMap;
use crate::quantized_nn::RmsNorm;
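
For context, a minimal usage sketch of the module documented by this diff, modeled on the candle quantized example. It assumes the crate alias used in the candle workspace (`candle` = `candle-core`), a hypothetical local GGUF file path, and placeholder token ids; tokenization and sampling are omitted.

```rust
use candle::quantized::gguf_file;
use candle::{Device, Tensor};
use candle_transformers::models::quantized_llama::ModelWeights;

fn main() -> anyhow::Result<()> {
    let device = Device::Cpu;

    // Read the GGUF container (quantized tensors + metadata) from disk.
    let path = "llama-7b.Q4_K_M.gguf"; // hypothetical local file
    let mut file = std::fs::File::open(path)?;
    let content = gguf_file::Content::read(&mut file)?;

    // Build the quantized model from the GGUF content.
    let mut model = ModelWeights::from_gguf(content, &mut file, &device)?;

    // Run a single forward pass over pre-tokenized prompt ids.
    let prompt_ids: Vec<u32> = vec![1, 15043, 29892, 3186]; // placeholder token ids
    let input = Tensor::new(prompt_ids.as_slice(), &device)?.unsqueeze(0)?;
    let logits = model.forward(&input, 0)?;
    println!("logits shape: {:?}", logits.shape());
    Ok(())
}
```

For GGML-format files the module also exposes a `from_ggml` constructor; the GGUF path above is the one exercised by the candle quantized example.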