path: root/candle-examples
Commit message | Author | Date | Files | Lines (-/+)
* Put the onnx example behind a feature flag. (#1276) | Laurent Mazare | 2023-11-06 | 1 | -1/+6
* Add more models to the onnx example. (#1273) | Laurent Mazare | 2023-11-05 | 2 | -8/+29
* [ONNX] Do not generate values for constants. (#1272) | Laurent Mazare | 2023-11-05 | 3 | -0/+68
* Add support for distil whisper (#1245) | Laurent Mazare | 2023-11-02 | 1 | -3/+15
* Add a KV cache to marian decoding. (#1226) | Laurent Mazare | 2023-10-31 | 2 | -10/+15
* Instructions for generating the tokenizer configs for marian-mt. (#1225) | Laurent Mazare | 2023-10-31 | 2 | -0/+1404
* Add support for the marian base model. (#1221) | Laurent Mazare | 2023-10-30 | 1 | -11/+45
* Use the hub files for the marian example. (#1220) | Laurent Mazare | 2023-10-30 | 2 | -17/+60
* Bugfixes for marian-mt. (#1219) | Laurent Mazare | 2023-10-30 | 1 | -4/+3
* PyO3: Better shape handling (#1143) | Lukas Kreussel | 2023-10-29 | 1 | -1/+1
* Marian MT model (#1210) | Laurent Mazare | 2023-10-29 | 1 | -0/+90
* Allow for different behavior between training and eval (#1213) | Laurent Mazare | 2023-10-29 | 2 | -4/+4
* feat: implement VGG13, VGG16 and VGG19 (#1211) | drbh | 2023-10-29 | 2 | -0/+90
* Add DDPG and fix Gym wrapper (#1207) | Travis Hammond | 2023-10-28 | 3 | -25/+549
* Infer the config for llama2-c. (#1208) | Laurent Mazare | 2023-10-28 | 2 | -3/+13
* Move the llama2-c model in transformers. (#1205) | Laurent Mazare | 2023-10-28 | 4 | -712/+3
* Add fuse-conv-bn method for Conv2d (#1196) | jamjamjon | 2023-10-27 | 1 | -7/+2
* Add a quantized variant of llama2.c (#1197) | Laurent Mazare | 2023-10-27 | 3 | -10/+285
* Add support for the phi-hermes finetuned model. (#1192) | Laurent Mazare | 2023-10-27 | 1 | -3/+11
* Use the hub model file when possible. (#1190) | Laurent Mazare | 2023-10-26 | 2 | -5/+68
* Add the jina-bert embeddings model. (#1187) | Laurent Mazare | 2023-10-26 | 1 | -0/+162
* Mention the flash-attention restriction in the readme. (#1158) | Laurent Mazare | 2023-10-23 | 1 | -0/+3
* Add a quantized blip model. (#1155) | Laurent Mazare | 2023-10-22 | 1 | -17/+53
* Handle LongStorage in pytorch checkpoints. (#1152) | Laurent Mazare | 2023-10-22 | 1 | -27/+20
* Add some KV cache to blip. (#1150) | Laurent Mazare | 2023-10-22 | 1 | -7/+7
* Blip attention mask + readme (#1146) | Laurent Mazare | 2023-10-21 | 1 | -0/+19
* Blip fixes (#1145) | Laurent Mazare | 2023-10-21 | 1 | -4/+68
* Add the blip example. (#1144) | Laurent Mazare | 2023-10-21 | 1 | -0/+54
* Make func cloneable. (#1137) | Laurent Mazare | 2023-10-20 | 1 | -1/+1
* Readme updates. (#1134) | Laurent Mazare | 2023-10-20 | 1 | -0/+20
* Add some vision transformers models (#1132) | Laurent Mazare | 2023-10-19 | 1 | -0/+59
* Expose the larger resnets (50/101/152) in the example. (#1131) | Laurent Mazare | 2023-10-19 | 2 | -0/+26
* Add a readme for the resnet example. (#1129) | Laurent Mazare | 2023-10-19 | 1 | -0/+19
* Experiment with resnet (#1128) | Laurent Mazare | 2023-10-19 | 1 | -0/+76
* Add support for Zephyr-7b in the quantized model. (#1124) | Laurent Mazare | 2023-10-18 | 1 | -2/+12
* Add the quantized mpt model. (#1123) | Laurent Mazare | 2023-10-18 | 1 | -5/+36
* Add a mention to the replit-code model in the readme. (#1121) | Laurent Mazare | 2023-10-18 | 2 | -29/+24
* MPT alibi fixes. (#1120) | Laurent Mazare | 2023-10-18 | 2 | -1/+46
* MPT fixes. (#1117) | Laurent Mazare | 2023-10-17 | 1 | -1/+1
* Build alibi bias. (#1115) | Laurent Mazare | 2023-10-17 | 1 | -0/+234
* Formatting tweak. (#1111) | Laurent Mazare | 2023-10-16 | 1 | -3/+9
* Add support for Puffin-Phi-v2. (#1110) | Laurent Mazare | 2023-10-16 | 2 | -3/+30
* Fix the verbose prompt for phi. (#1097) | Laurent Mazare | 2023-10-15 | 1 | -2/+5
* Improve the reshape error messages. (#1096) | Laurent Mazare | 2023-10-15 | 1 | -7/+16
* Add support for phi-1.0 (#1093) | Laurent Mazare | 2023-10-14 | 1 | -12/+47
* Add some reinforcement learning example. (#1090) | Laurent Mazare | 2023-10-14 | 7 | -1/+603
* Convmixer example (#1074) | Laurent Mazare | 2023-10-11 | 1 | -0/+59
* Remove some unusued bits. (#1067) | Laurent Mazare | 2023-10-09 | 2 | -14/+1
* Override the repo for SDXL f16 vae weights. (#1064) | Laurent Mazare | 2023-10-09 | 1 | -2/+13
* Quantized version of StableLM. (#1058) | Laurent Mazare | 2023-10-08 | 1 | -7/+25