author	Laurent Mazare <laurent.mazare@gmail.com>	2023-10-22 09:44:48 +0100
committer	GitHub <noreply@github.com>	2023-10-22 09:44:48 +0100
commit	df2f89b6cf897305a566cb08446dd4522d42919a (patch)
tree	97b45b414a9fc1bfe83e42fc2014686de8fcde43 /README.md
parent	62fc9656173029dbb3a14dc1938a819501242809 (diff)
Add some KV cache to blip. (#1150)
* Add some KV cache to blip.
* Mention BLIP in the readme.
Diffstat (limited to 'README.md')
-rw-r--r--	README.md	9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 03c2a1f5..09f15885 100644
--- a/README.md
+++ b/README.md
@@ -99,6 +99,8 @@ We also provide a some command line based examples using state of the art models
- [DINOv2](./candle-examples/examples/dinov2/): computer vision model trained
using self-supervision (can be used for imagenet classification, depth
evaluation, segmentation).
+- [BLIP](./candle-examples/examples/blip/): image to text model, can be used to
+ generate captions for an image.
Run them using commands like:
```
@@ -163,8 +165,11 @@ If you have an addition to this list, please submit a pull request.
- T5.
- Bert.
- Whisper (multi-lingual support).
- - Stable Diffusion v1.5, v2.1, XL v1.0.
- - Wurstchen v2.
+ - Text to image.
+ - Stable Diffusion v1.5, v2.1, XL v1.0.
+ - Wurstchen v2.
+ - Image to text.
+ - BLIP.
- Computer Vision Models.
- DINOv2, ConvMixer, EfficientNet, ResNet, ViT.
- yolo-v3, yolo-v8.
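For context, the BLIP example this patch adds to the README can be invoked along these lines from the repository root. The `--image` flag is an assumption based on candle's usual example layout, not something stated in this diff; check the example's `--help` output for the actual arguments.

```shell
# Build and run the candle BLIP captioning example in release mode.
# Everything after `--` is passed to the example binary itself;
# --image (hypothetical flag) names the input picture to caption.
cargo run --example blip --release -- --image path/to/picture.jpg
```

With the KV cache added in this commit, repeated decoding steps reuse previously computed keys and values, so caption generation should be noticeably faster than recomputing attention over the full sequence at every token.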