author | Laurent Mazare <laurent.mazare@gmail.com> | 2023-09-16 08:22:24 +0200
committer | GitHub <noreply@github.com> | 2023-09-16 07:22:24 +0100
commit | 7cafca835a4bb9a21f3c8111e2f61b7a6b1270fd (patch)
tree | 866d39e848afa38104c0a79cc100d6d722bf4312 /README.md
parent | 04ca2b9ebd75b065641512d4254850e065699030 (diff)
readme tweaks. (#867)
Diffstat (limited to 'README.md')
-rw-r--r-- | README.md | 23
1 file changed, 17 insertions, 6 deletions
@@ -8,7 +8,9 @@ Candle is a minimalist ML framework for Rust with a focus on performance (includ
 and ease of use. Try our online demos:
 [whisper](https://huggingface.co/spaces/lmz/candle-whisper),
 [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2),
-[yolo](https://huggingface.co/spaces/lmz/candle-yolo).
+[yolo](https://huggingface.co/spaces/lmz/candle-yolo),
+[Segment
+Anything](https://huggingface.co/spaces/radames/candle-segment-anything-wasm).
 
 ## Get started
 
@@ -114,8 +116,7 @@ And then head over to
 <!--- ANCHOR: useful_libraries --->
 
 ## Useful Libraries
-- `candle-lora`
-  - [`candle-lora`](https://github.com/EricLBuehler/candle-lora) provides a LoRA implementation that conforms to the official `peft` implementation.
+- [`candle-lora`](https://github.com/EricLBuehler/candle-lora) provides a LoRA implementation that conforms to the official `peft` implementation.
 
 If you have an addition to this list, please submit a pull request.
 
@@ -133,10 +134,20 @@ If you have an addition to this list, please submit a pull request.
 - CUDA backend for efficiently running on GPUs, multiple GPU distribution via NCCL.
 - WASM support, run your models in a browser.
 - Included models.
-    - LLMs: LLaMA v1 and v2, Falcon, StarCoder.
+    - Language Models.
+        - LLaMA v1 and v2.
+        - Falcon.
+        - StarCoder.
+        - T5.
+        - Bert.
     - Whisper (multi-lingual support).
-    - Stable Diffusion.
-    - Computer Vision: DINOv2, EfficientNet, yolo-v3, yolo-v8.
+    - Stable Diffusion v1.5, v2.1, XL v1.0.
+    - Computer Vision Models.
+        - DINOv2.
+        - EfficientNet.
+        - yolo-v3.
+        - yolo-v8.
+        - Segment-Anything Model (SAM).
 - File formats: load models from safetensors, npz, ggml, or PyTorch files.
 - Serverless (on CPU), small and fast deployments.
 - Quantization support using the llama.cpp quantized types.
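The diff above only touches the README's demo links and feature list; for orientation, here is a minimal sketch of what a basic candle program looks like, assuming the `candle-core` crate and its CPU device. It is illustrative, not part of this commit, and the shapes and values are arbitrary.

```rust
// Minimal candle sketch: build two random tensors on the CPU and multiply them.
// Assumes the `candle-core` crate; errors use candle's own Result type.
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    let device = Device::Cpu;

    // 2x3 and 3x4 f32 tensors drawn from a standard normal distribution.
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;

    // Matrix multiplication yields a 2x4 tensor.
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```

Swapping `Device::Cpu` for a CUDA device is the usual way to target the GPU backend mentioned in the feature list, provided the crate is built with CUDA support.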