path: root/candle-examples/examples
| Commit message | Author | Age | Files | Lines |
|---|---|---|---|---|
| Update the Phi model to use the updated architecture. (#1580) | Laurent Mazare | 2024-01-13 | 1 | -11/+35 |
| Metal: f16 and bf16 where_cond + benchmark (#1545) | ivarflakstad | 2024-01-12 | 1 | -1/+0 |
| Mention VGG in the readme. (#1573) | Laurent Mazare | 2024-01-12 | 1 | -2/+4 |
| Pin the revision used for phi-v2 + make it the default. (#1572) | Laurent Mazare | 2024-01-12 | 1 | -4/+3 |
| Add RepVGG model. (#1561) | Jani Monoses | 2024-01-11 | 2 | -0/+131 |
| Use bindgen-cuda for the custom-kernel example. (#1536) | Laurent Mazare | 2024-01-07 | 2 | -3/+3 |
| fix index_pos bug when kv cache is disabled. (#1517) | optman | 2024-01-06 | 1 | -4/+4 |
| Format properly the Stable Diffusion example run with params (#1511) | stano | 2024-01-01 | 1 | -1/+1 |
| Do not implement Module for BatchNorm. (#1513) | Laurent Mazare | 2024-01-01 | 1 | -1/+1 |
| Add support for tiny-llama-1.1b. (#1512) | Laurent Mazare | 2023-12-31 | 1 | -2/+9 |
| Add Policy Gradient to Reinforcement Learning examples (#1500) | s-casci | 2023-12-30 | 4 | -124/+275 |
| Fix lints for clippy 1.75. (#1494) | Laurent Mazare | 2023-12-28 | 1 | -1/+1 |
| Rework the llama example config, add the solar model. (#1485) | Laurent Mazare | 2023-12-26 | 1 | -72/+36 |
| Use the new hub helper function. (#1484) | Laurent Mazare | 2023-12-26 | 2 | -16/+2 |
| Helper function to load sharded safetensors files (#1481) | Laurent Mazare | 2023-12-25 | 5 | -65/+10 |
| Fix the quantized mistral example. (#1478) | Laurent Mazare | 2023-12-25 | 1 | -3/+13 |
| Support mistral instruct v0.2. (#1475) | Laurent Mazare | 2023-12-23 | 2 | -7/+18 |
| MMLU evaluation for Phi. (#1474) | Laurent Mazare | 2023-12-23 | 1 | -13/+104 |
| Fix for mamba 2.8b. (#1472) | Laurent Mazare | 2023-12-23 | 1 | -1/+1 |
| Support different mamba models. (#1471) | Laurent Mazare | 2023-12-23 | 1 | -7/+52 |
| Sketch the minimal mamba example. (#1465) | Laurent Mazare | 2023-12-22 | 3 | -0/+458 |
| Fix a couple typos (#1451) | Laurent Mazare | 2023-12-17 | 1 | -1/+1 |
| Tweak the readme for phi and the default sample length. (#1450) | Laurent Mazare | 2023-12-16 | 2 | -15/+12 |
| Mixtral quantized instruct. (#1447) | Laurent Mazare | 2023-12-16 | 1 | -0/+11 |
| Update the readme to mention mixtral. (#1443) | Laurent Mazare | 2023-12-15 | 1 | -0/+13 |
| Quantized mixtral model (#1442) | Laurent Mazare | 2023-12-15 | 1 | -1/+12 |
| Add the Mixtral model. (#1437) | Laurent Mazare | 2023-12-15 | 2 | -0/+288 |
| Fix phi example (#1436) | niu tech | 2023-12-15 | 1 | -1/+1 |
| Quantized version for phi-v2. (#1430) | Laurent Mazare | 2023-12-13 | 2 | -6/+31 |
| Support for phi-2. (#1429) | Laurent Mazare | 2023-12-13 | 1 | -14/+28 |
| Speed up bert with approx gelu (#1410) | Juarez Bochi | 2023-12-06 | 2 | -3/+52 |
| Add the leo models to the quantized examples. (#1398) | Laurent Mazare | 2023-12-03 | 1 | -31/+46 |
| Add more mentions to SDXL Turbo in the readme. (#1397) | Laurent Mazare | 2023-12-03 | 1 | -6/+16 |
| Stable Diffusion Turbo Support (#1395) | Edwin Cheng | 2023-12-03 | 1 | -31/+90 |
| Add quantized Starling, fix open-chat prompt (#1393) | Lucas de Ávila Martins | 2023-12-02 | 1 | -6/+36 |
| Use the llama weight names for the Yi example. (#1381) | Laurent Mazare | 2023-11-27 | 1 | -2/+2 |
| Distibert (#1366) | Odunayo | 2023-11-24 | 2 | -0/+157 |
| Fix linspace implementation (#1358) | MilkFather | 2023-11-23 | 1 | -1/+1 |
| Ensure to copy data to cpu before iterating. (#1360) | Marcus Asteborg | 2023-11-23 | 2 | -1/+4 |
| Fix OpenChat 3.5 tokenizer (#1347) | Lucas de Ávila Martins | 2023-11-19 | 1 | -1/+3 |
| Add OpenChat 3.5 to quantized examples (#1346) | Lucas de Ávila Martins | 2023-11-19 | 1 | -7/+39 |
| Use the whisper-v3 tokenizer now that it has been added. (#1337) | Laurent Mazare | 2023-11-16 | 1 | -6/+8 |
| fix: address clippy 0.1.74 issues (#1336) | drbh | 2023-11-16 | 1 | -2/+2 |
| Update readme.md (#1322) | Ryan Kopf | 2023-11-12 | 1 | -1/+1 |
| Fix pose estimation image path (#1326) | Bernardo de Lemos | 2023-11-12 | 1 | -1/+1 |
| Add the Yi-6b and Yi-34b models. (#1320) | Laurent Mazare | 2023-11-11 | 1 | -0/+268 |
| Fix quantized zephyr chat prompt (#1314) (#1317) | Michael Leandersson | 2023-11-11 | 1 | -2/+7 |
| Add support to UL2 model family (#1300) | Juarez Bochi | 2023-11-09 | 2 | -1/+14 |
| Add support for TrOCR Model (#1303) | Ogundepo Odunayo | 2023-11-09 | 4 | -0/+302 |
| Add the mel filters for 128 bins. (#1295) | Laurent Mazare | 2023-11-08 | 2 | -1/+5 |