path: root/candle-examples
Commit message  [Author, Date, Files changed, Lines -/+]
* Quantized GGUF style (#1523)  [Nicolas Patry, 2024-01-17, 9 files, -35/+43]
* Add MobileOne model. (#1595)  [Jani Monoses, 2024-01-16, 2 files, -0/+118]
* Use the new phi model by default. (#1589)  [Laurent Mazare, 2024-01-15, 1 file, -26/+29]
* Update the Phi model to use the updated architecture. (#1580)  [Laurent Mazare, 2024-01-13, 1 file, -11/+35]
* Metal: f16 and bf16 where_cond + benchmark (#1545)  [ivarflakstad, 2024-01-12, 1 file, -1/+0]
* Mention VGG in the readme. (#1573)  [Laurent Mazare, 2024-01-12, 1 file, -2/+4]
* Pin the revision used for phi-v2 + make it the default. (#1572)  [Laurent Mazare, 2024-01-12, 2 files, -10/+3]
* Add RepVGG model. (#1561)  [Jani Monoses, 2024-01-11, 2 files, -0/+131]
* Use bindgen-cuda for the custom-kernel example. (#1536)  [Laurent Mazare, 2024-01-07, 4 files, -236/+20]
* Simplifying our internal cargo dependencies. (#1529)  [Nicolas Patry, 2024-01-07, 1 file, -6/+6]
* fix index_pos bug when kv cache is disabled. (#1517)  [optman, 2024-01-06, 1 file, -4/+4]
* Format properly the Stable Diffusion example run with params (#1511)  [stano, 2024-01-01, 1 file, -1/+1]
* Do not implement Module for BatchNorm. (#1513)  [Laurent Mazare, 2024-01-01, 1 file, -1/+1]
* Add support for tiny-llama-1.1b. (#1512)  [Laurent Mazare, 2023-12-31, 1 file, -2/+9]
* Add Policy Gradient to Reinforcement Learning examples (#1500)  [s-casci, 2023-12-30, 4 files, -124/+275]
* Fix lints for clippy 1.75. (#1494)  [Laurent Mazare, 2023-12-28, 1 file, -1/+1]
* Bump the crate version to 0.3.3. (#1490)  [Laurent Mazare, 2023-12-28, 1 file, -6/+6]
* Rework the llama example config, add the solar model. (#1485)  [Laurent Mazare, 2023-12-26, 1 file, -72/+36]
* Use the new hub helper function. (#1484)  [Laurent Mazare, 2023-12-26, 2 files, -16/+2]
* Helper function to load sharded safetensors files (#1481)  [Laurent Mazare, 2023-12-25, 7 files, -67/+40]
* Fix the quantized mistral example. (#1478)  [Laurent Mazare, 2023-12-25, 1 file, -3/+13]
* Support mistral instruct v0.2. (#1475)  [Laurent Mazare, 2023-12-23, 2 files, -7/+18]
* MMLU evaluation for Phi. (#1474)  [Laurent Mazare, 2023-12-23, 2 files, -13/+105]
* Fix for mamba 2.8b. (#1472)  [Laurent Mazare, 2023-12-23, 1 file, -1/+1]
* Support different mamba models. (#1471)  [Laurent Mazare, 2023-12-23, 1 file, -7/+52]
* Sketch the minimal mamba example. (#1465)  [Laurent Mazare, 2023-12-22, 3 files, -0/+458]
* Merge pull request #1318 from huggingface/metal4  [Nicolas Patry, 2023-12-20, 1 file, -0/+1]
|\
| * Starting to fix some tests.  [Nicolas Patry, 2023-11-30, 1 file, -0/+1]
* | Bump the crate version to 0.3.2. (#1452)  [Laurent Mazare, 2023-12-17, 1 file, -6/+6]
* | Fix a couple typos (#1451)  [Laurent Mazare, 2023-12-17, 1 file, -1/+1]
* | Tweak the readme for phi and the default sample length. (#1450)  [Laurent Mazare, 2023-12-16, 2 files, -15/+12]
* | Mixtral quantized instruct. (#1447)  [Laurent Mazare, 2023-12-16, 1 file, -0/+11]
* | Update the readme to mention mixtral. (#1443)  [Laurent Mazare, 2023-12-15, 1 file, -0/+13]
* | Quantized mixtral model (#1442)  [Laurent Mazare, 2023-12-15, 1 file, -1/+12]
* | Add the Mixtral model. (#1437)  [Laurent Mazare, 2023-12-15, 2 files, -0/+288]
* | Fix phi example (#1436)  [niu tech, 2023-12-15, 1 file, -1/+1]
* | Quantized version for phi-v2. (#1430)  [Laurent Mazare, 2023-12-13, 2 files, -6/+31]
* | Support for phi-2. (#1429)  [Laurent Mazare, 2023-12-13, 1 file, -14/+28]
* | Speed up bert with approx gelu (#1410)  [Juarez Bochi, 2023-12-06, 2 files, -3/+52]
* | Add nvcc ccbin support to examples (#1401)  [emka, 2023-12-03, 1 file, -0/+7]
* | Add compute cap env support to examples (#1400)  [emka, 2023-12-03, 1 file, -2/+11]
* | Add the leo models to the quantized examples. (#1398)  [Laurent Mazare, 2023-12-03, 1 file, -31/+46]
* | Add more mentions to SDXL Turbo in the readme. (#1397)  [Laurent Mazare, 2023-12-03, 1 file, -6/+16]
* | Stable Diffusion Turbo Support (#1395)  [Edwin Cheng, 2023-12-03, 1 file, -31/+90]
* | Add quantized Starling, fix open-chat prompt (#1393)  [Lucas de Ávila Martins, 2023-12-02, 1 file, -6/+36]
|/
* Use the llama weight names for the Yi example. (#1381)  [Laurent Mazare, 2023-11-27, 1 file, -2/+2]
* Distibert (#1366)  [Odunayo, 2023-11-24, 2 files, -0/+157]
* Fix linspace implementation (#1358)  [MilkFather, 2023-11-23, 1 file, -1/+1]
* Ensure to copy data to cpu before iterating. (#1360)  [Marcus Asteborg, 2023-11-23, 2 files, -1/+4]
* Fix OpenChat 3.5 tokenizer (#1347)  [Lucas de Ávila Martins, 2023-11-19, 1 file, -1/+3]