author     zachcp <zachcp@users.noreply.github.com>  2024-11-18 08:19:23 -0500
committer  GitHub <noreply@github.com>  2024-11-18 14:19:23 +0100
commit     386fd8abb4be23c125e8100fed932f17d356a160 (patch)
tree       d4964322db768d31e2e4c1949848315fcfd7cfa2 /candle-transformers/src/models/chinese_clip/vision_model.rs
parent     12d7e7b1450f0c3f87c3cce3a2a1dd1674cb8fd7 (diff)
Module Docs (#2624)
* update whisper
* update llama2c
* update t5
* update phi and t5
* add a blip model
* quantized llama doc
* add two new docs
* add docs and emoji
* additional models
* openclip
* pixtral
* edits on the model docs
* update yi
* update a few more models
* add persimmon
* add model-level doc
* names
* update module doc
* links in hiera
* remove empty URL
* update more hyperlinks
* updated hyperlinks
* more links
* Update mod.rs
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
Diffstat (limited to 'candle-transformers/src/models/chinese_clip/vision_model.rs')
-rw-r--r--  candle-transformers/src/models/chinese_clip/vision_model.rs  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/candle-transformers/src/models/chinese_clip/vision_model.rs b/candle-transformers/src/models/chinese_clip/vision_model.rs
index 2d345e0f..a20535c4 100644
--- a/candle-transformers/src/models/chinese_clip/vision_model.rs
+++ b/candle-transformers/src/models/chinese_clip/vision_model.rs
@@ -3,8 +3,8 @@
 //! Chinese contrastive Language-Image Pre-Training (CLIP) is an architecture trained on
 //! pairs of images with related texts.
 //!
-//! https://github.com/OFA-Sys/Chinese-CLIP
-//! https://github.com/huggingface/transformers/blob/5af7d41e49bbfc8319f462eb45253dcb3863dfb7/src/transformers/models/chinese_clip/modeling_chinese_clip.py
+//! - 💻 [Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP)
+//! - 💻 [GH](https://github.com/huggingface/transformers/blob/5af7d41e49bbfc8319f462eb45253dcb3863dfb7/src/transformers/models/chinese_clip/modeling_chinese_clip.py)
 use candle::{DType, IndexOp, Module, Result, Shape, Tensor, D};
 use candle_nn as nn;
@@ -49,7 +49,7 @@ impl Default for ChineseClipVisionConfig {
 }
 impl ChineseClipVisionConfig {
-    /// referer: https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16/blob/main/config.json
+    /// [referer](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16/blob/main/config.json)
     pub fn clip_vit_base_patch16() -> Self {
         Self {
             hidden_size: 768,
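The second hunk of the diff documents the `ChineseClipVisionConfig::clip_vit_base_patch16` preset. As a rough usage sketch (assuming `candle-transformers` from this repository is available as a dependency; only `hidden_size: 768` is confirmed by the diff, other fields follow the linked upstream config):

```rust
// Sketch only: requires the candle-transformers crate; not runnable standalone.
use candle_transformers::models::chinese_clip::vision_model::ChineseClipVisionConfig;

fn main() {
    // Build the ViT-B/16 preset documented by the commit; per the diff it
    // mirrors OFA-Sys/chinese-clip-vit-base-patch16/config.json.
    let cfg = ChineseClipVisionConfig::clip_vit_base_patch16();
    println!("hidden_size = {}", cfg.hidden_size); // 768 per the diff
}
```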