| field | value | |
|---|---|---|
| author | zachcp <zachcp@users.noreply.github.com> | 2024-11-15 02:30:15 -0500 |
| committer | GitHub <noreply@github.com> | 2024-11-15 08:30:15 +0100 |
| commit | f689ce5d39c6f1475dfc71503288ea2905c8f685 (patch) | |
| tree | 10b35ae68f1f5683edfebdcf92970de78ba05283 /candle-transformers/src/models/quantized_rwkv_v5.rs | |
| parent | 0ed24b9852ccc7dfb92d555afba3d56c2a3f3224 (diff) | |
| download | candle-f689ce5d39c6f1475dfc71503288ea2905c8f685.tar.gz candle-f689ce5d39c6f1475dfc71503288ea2905c8f685.tar.bz2 candle-f689ce5d39c6f1475dfc71503288ea2905c8f685.zip | |
Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of moddocs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
Diffstat (limited to 'candle-transformers/src/models/quantized_rwkv_v5.rs')
-rw-r--r-- | candle-transformers/src/models/quantized_rwkv_v5.rs | 17 |
1 file changed, 17 insertions(+), 0 deletions(-)
```diff
diff --git a/candle-transformers/src/models/quantized_rwkv_v5.rs b/candle-transformers/src/models/quantized_rwkv_v5.rs
index c41d7b4e..cc5204bf 100644
--- a/candle-transformers/src/models/quantized_rwkv_v5.rs
+++ b/candle-transformers/src/models/quantized_rwkv_v5.rs
@@ -1,3 +1,20 @@
+//! RWKV v5 model implementation with quantization support.
+//!
+//! RWKV v5 is an attention-free language model optimized for efficiency.
+//! This implementation provides quantization for reduced memory and compute.
+//!
+//! Key characteristics:
+//! - Linear attention mechanism
+//! - GroupNorm layer normalization
+//! - Time-mixing layers
+//! - State-based sequential processing
+//! - Support for 8-bit quantization
+//!
+//! References:
+//! - [RWKV Model](https://github.com/BlinkDL/RWKV-LM)
+//! - [RWKV v5 Architecture](https://www.rwkv.com/v5)
+//!
+
 use crate::{
     quantized_nn::{layer_norm, linear_no_bias as linear, Embedding, Linear},
     quantized_var_builder::VarBuilder,
```