path: root/candle-examples
Commit message | Author | Date | Files | Lines
...
* Fix FLUX.1 weights (#2457) | Eugene Hauptmann | 2024-08-29 | 1 | -3/+3
* MobileCLIP models S1 and S2 (#2454) | Jani Monoses | 2024-08-29 | 4 | -10/+250
* FastViT fixes. (#2452) | Jani Monoses | 2024-08-28 | 1 | -5/+5
* Add FastViT model. (#2444) | Jani Monoses | 2024-08-23 | 2 | -0/+122
* Fix for parler-tts, do not add the last slice of padding tokens. (#2442) | Laurent Mazare | 2024-08-22 | 1 | -2/+21
* silero-vad v5 example (#2321) | shua | 2024-08-22 | 3 | -0/+216
* Update README.md (#2435) | Laurent Mazare | 2024-08-19 | 1 | -1/+3
* Add a readme for the parler-tts example. (#2434) | Laurent Mazare | 2024-08-19 | 3 | -30/+21
* Add the DAC model. (#2433) | Laurent Mazare | 2024-08-19 | 2 | -7/+20
* parler-tts support (#2431) | Laurent Mazare | 2024-08-18 | 2 | -0/+204
* Fix the marian tokenizer importer. (#2426) | Laurent Mazare | 2024-08-17 | 1 | -4/+16
* Add support for gemma-2. (#2425) | Laurent Mazare | 2024-08-17 | 2 | -22/+74
* Apply rustfmt. (#2421) | Laurent Mazare | 2024-08-16 | 1 | -1/+0
* Fix build issue in EOS Token in llama-multiprocess (#2420) | Hadi | 2024-08-16 | 1 | -2/+11
* Add Based LLM from Hazy Research. (#2411) | Jani Monoses | 2024-08-12 | 2 | -0/+295
* Fix issues in the encodec example README.md (#2407) | Joel Nises | 2024-08-10 | 2 | -1/+1
* Add the import script for the T5 tokenizer. (#2399) | Laurent Mazare | 2024-08-05 | 1 | -0/+6
* add models support and example for THUDM/glm-4 (#2362) | 唐璜 | 2024-08-05 | 2 | -0/+332
* Support for mistral-nemo. (#2396) | Laurent Mazare | 2024-08-04 | 1 | -7/+14
* Support the flux-dev model too. (#2395) | Laurent Mazare | 2024-08-04 | 1 | -9/+37
* Add the flux model for image generation. (#2390) | Laurent Mazare | 2024-08-04 | 3 | -0/+201
* Fix cargo fmt. (#2383) | Laurent Mazare | 2024-08-01 | 1 | -14/+19
* Jina Bert Example fix and more configuration (#2191) | Joan Fontanals | 2024-08-01 | 1 | -12/+28
* Add Hiera vision model. (#2382) | Jani Monoses | 2024-08-01 | 2 | -0/+117
* Enable BF16 on metal. (#2380) | Laurent Mazare | 2024-08-01 | 1 | -4/+2
* Use BF16 on metal when possible. (#2378) | Laurent Mazare | 2024-08-01 | 1 | -5/+1
* bert attention mask (#1934) | Zheng Li | 2024-08-01 | 1 | -2/+10
* Add support for Llama 3.1 (#2359) | Eric Buehler | 2024-07-26 | 3 | -8/+26
* onnx: fix pad, unsqueeze (#2317) | shua | 2024-07-23 | 13 | -14/+14
* fix clip example title (#2345) | Caio Petrucci Rosa | 2024-07-23 | 1 | -1/+1
* feat(candle-transformers/models/codegeex4-9b): add codegeex4-9 (#2334) | donjuanplatinum | 2024-07-21 | 2 | -0/+348
* Pin the revision used by moondream. (#2340) | Laurent Mazare | 2024-07-18 | 1 | -7/+15
* Add mathstral in the examples. (#2339) | Laurent Mazare | 2024-07-18 | 1 | -0/+3
* add quantized qwen2 (#2329) | Zhuo Jinggang | 2024-07-12 | 2 | -0/+317
* Add Mobilenet v4 (#2325) | Jani Monoses | 2024-07-09 | 3 | -17/+137
* Add EVA-02 model ( https://arxiv.org/abs/2303.11331 ) (#2311) | v-espitalier | 2024-07-07 | 2 | -0/+103
* Beit: Add the gen_relative_position_index() function (#2306) | v-espitalier | 2024-07-04 | 1 | -1/+1
* Add Beit model ( https://arxiv.org/abs/2106.08254 ) (#2305) | v-espitalier | 2024-07-01 | 2 | -0/+99
* make up for the missing last token output of phi2 example (#2299) | Czxck001 | 2024-06-29 | 1 | -0/+4
* Add DINOv2Reg4 + PlantCLEF2024 (#2293) | v-espitalier | 2024-06-29 | 3 | -0/+113
* Depth Anything v2 (#2279) | Jeroen Vlek | 2024-06-24 | 4 | -0/+257
* Support for the new Qwen2 models. (#2257) | Laurent Mazare | 2024-06-07 | 1 | -10/+26
* Add LLaVA support (#2234) | chenwanqq | 2024-06-03 | 5 | -0/+791
* Simplify the KvCache api. (#2207) | Laurent Mazare | 2024-05-23 | 1 | -1/+0
* Add Phi-3 Medium (#2205) | Jani Monoses | 2024-05-23 | 1 | -6/+13
* Use flash-attn in gemma. (#2195) | Laurent Mazare | 2024-05-18 | 1 | -1/+4
* Support flash-attn in quantized phi3. (#2194) | Laurent Mazare | 2024-05-18 | 1 | -1/+10
* Add a slice_set op. (#2193) | Laurent Mazare | 2024-05-18 | 1 | -1/+1
* Support embedding model gte-Qwen1.5-7B-instruct (#2190) | Yin Guobing | 2024-05-16 | 3 | -1/+198
* Allow the threshold argumet to be negative in the segment-anything example (#... | Daniel Varga | 2024-05-15 | 1 | -1/+1