author     Laurent Mazare <laurent.mazare@gmail.com>  2024-01-12 09:59:29 +0100
committer  GitHub <noreply@github.com>  2024-01-12 09:59:29 +0100
commit     8e06bfb4fd33f1229a03abee20cc1c07198408b5 (patch)
tree       89ea4cf4264e71f3247ab85451460e7a99efc7ac /README.md
parent     6242276c0970db6e5805feed4c2ef3b0bf2ba413 (diff)
Mention VGG in the readme. (#1573)
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 93cbccc4..c4f27548 100644
--- a/README.md
+++ b/README.md
@@ -109,6 +109,9 @@ We also provide a some command line based examples using state of the art models
- [DINOv2](./candle-examples/examples/dinov2/): computer vision model trained
using self-supervision (can be used for imagenet classification, depth
evaluation, segmentation).
+- [VGG](./candle-examples/examples/vgg/),
+ [RepVGG](./candle-examples/examples/repvgg): computer vision models.
+- [BLIP](./candle-examples/examples/blip/): image to text model, can be used to
- [BLIP](./candle-examples/examples/blip/): image to text model, can be used to
generate captions for an image.
- [Marian-MT](./candle-examples/examples/marian-mt/): neural machine translation
@@ -204,7 +207,7 @@ If you have an addition to this list, please submit a pull request.
- Image to text.
- BLIP.
- Computer Vision Models.
- - DINOv2, ConvMixer, EfficientNet, ResNet, ViT.
+ - DINOv2, ConvMixer, EfficientNet, ResNet, ViT, VGG, RepVGG.
- yolo-v3, yolo-v8.
- Segment-Anything Model (SAM).
- File formats: load models from safetensors, npz, ggml, or PyTorch files.
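The VGG and RepVGG entries added by this commit point at directories under candle-examples, which are normally run through Cargo like the other listed examples. A minimal sketch of invoking them from a checkout of the candle repository, assuming the usual candle-examples convention of an `--image` flag for vision models (that flag is an assumption here, not something stated in this diff):

```shell
# From the root of a candle checkout: run the VGG example in release mode.
# The `--image` argument is assumed from the common candle-examples pattern;
# check the example's own --help output for its actual flags.
cargo run --example vgg --release -- --image ./my-photo.jpg

# The RepVGG example is invoked the same way, just with a different example name.
cargo run --example repvgg --release -- --image ./my-photo.jpg
```

Building in `--release` matters for these examples: inference in a debug build is typically far slower.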