author    Laurent Mazare <laurent.mazare@gmail.com>    2024-09-30 19:31:14 +0200
committer GitHub <noreply@github.com>                  2024-09-30 19:31:14 +0200
commit    683ab698def755c24cec9987069d25efcf831fc4 (patch)
tree      84d0bd8ad2f5d7a00f67050c83520326d947b2fe /candle-transformers/src/models/llava/mod.rs
parent    2f49e1b5349f4e56677ec0d3dc3fe98f9cbb87c7 (diff)
Add Pixtral. (#2521)
* Add Pixtral.
* More pixtral vision encoder.
* Sketch a pixtral example.
* Sketch a pixtral example.
* Better image loading.
* Support loading images embedded in safetensor files.
* Clippy fixes.
* Add the llava multimodal adapter.
* Add more of the llava bits.
* Add the pixtral config.
* More pixtral inference.
* Add the text generation bits.
* Get the example to work.
* Bugfix.
* Run some bits of the model in f32.
* Blessed version :)
* Better rope frequency computations.
* README update.
Diffstat (limited to 'candle-transformers/src/models/llava/mod.rs')
-rw-r--r--  candle-transformers/src/models/llava/mod.rs  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/candle-transformers/src/models/llava/mod.rs b/candle-transformers/src/models/llava/mod.rs
index caa8737a..1ed3b50c 100644
--- a/candle-transformers/src/models/llava/mod.rs
+++ b/candle-transformers/src/models/llava/mod.rs
@@ -279,7 +279,7 @@ impl LLaVA {
(),
))?
} else {
- todo!("not implemented in original python LLaVA yet")
+ bail!("not implemented in original python LLaVA yet")
};
let new_image_feature = if mm_patch_merge_type.contains("unpad") {
let new_image_feature = new_image_feature
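
The one-line change swaps a panic for a recoverable error: `todo!` aborts the thread at runtime when an unsupported `mm_patch_merge_type` is hit, whereas a `bail!`-style macro returns an `Err` that the caller can handle. Below is a minimal, self-contained sketch of that pattern; `ModelError`, the local `bail!` macro, and `merge_image_features` are hypothetical stand-ins for illustration, not candle's actual error type or API.

// Hypothetical stand-in for an error type and a bail!-style macro.
#[derive(Debug)]
struct ModelError(String);

macro_rules! bail {
    ($($arg:tt)*) => {
        // Early-return an Err from the enclosing function.
        return Err(ModelError(format!($($arg)*)))
    };
}

// Mimics the shape of the patched branch: pick a merge strategy or fail.
fn merge_image_features(mm_patch_merge_type: &str) -> Result<String, ModelError> {
    let feature = if mm_patch_merge_type == "flat" {
        "flattened features".to_string()
    } else if mm_patch_merge_type.starts_with("spatial") {
        "spatially merged features".to_string()
    } else {
        // Before the patch this arm was `todo!(...)`, which panics;
        // returning an error keeps the failure recoverable for the caller.
        bail!("not implemented in original python LLaVA yet")
    };
    Ok(feature)
}

fn main() {
    match merge_image_features("unknown-mode") {
        Ok(f) => println!("ok: {f}"),
        Err(e) => eprintln!("recoverable error: {e:?}"),
    }
}

With this shape, an unsupported configuration surfaces as an error the application can report or fall back from, instead of tearing down the whole inference process.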