path: root/candle-examples
Commit message | Author | Age | Files | Lines
* Add the optimizer trait. (#702) | Laurent Mazare | 2023-09-01 | 2 files | -2/+3
* Tweak some quantized args (#692) | Laurent Mazare | 2023-08-31 | 1 file | -5/+14
* Interactive mode for the quantized model. (#690) | Laurent Mazare | 2023-08-31 | 2 files | -55/+109
* Fix the accelerate build (#678) | Laurent Mazare | 2023-08-30 | 1 file | -4/+3
* Mnist training dropout (#677) | Laurent Mazare | 2023-08-30 | 1 file | -7/+11
* Add some documentation. (#673) | Laurent Mazare | 2023-08-30 | 2 files | -6/+6
* Neon optimized vecdot (#666) | Laurent Mazare | 2023-08-29 | 2 files | -364/+371
* Simplify usage of the pool functions. (#662) | Laurent Mazare | 2023-08-29 | 3 files | -11/+13
* Add a convnet training example. (#661) | Laurent Mazare | 2023-08-29 | 1 file | -1/+104
* Dilated convolutions (#657) | Laurent Mazare | 2023-08-29 | 5 files | -0/+8
* Upgrading hf-hub. | Nicolas Patry | 2023-08-29 | 1 file | -4/+4
* Merge pull request #439 from huggingface/training_hub_dataset | Nicolas Patry | 2023-08-29 | 2 files | -100/+9
|\
| * Fix clippy + save_image. | Nicolas Patry | 2023-08-29 | 1 file | -0/+37
| * Remove image dep. | Nicolas Patry | 2023-08-28 | 1 file | -1/+0
| * Fix deps. | Nicolas Patry | 2023-08-28 | 1 file | -1/+0
| * Re-enable local dir for mnist. | Nicolas Patry | 2023-08-28 | 1 file | -1/+9
| * Cleanup: | Nicolas Patry | 2023-08-28 | 1 file | -231/+0
| * Training: | Nicolas Patry | 2023-08-28 | 3 files | -7/+81
| * Better training+hub | Nicolas Patry | 2023-08-28 | 1 file | -0/+23
* | Use multiple transformer layer in the same cross-attn blocks. (#653) | Laurent Mazare | 2023-08-29 | 4 files | -22/+43
* | Preliminary support for SDXL. (#647) | Laurent Mazare | 2023-08-29 | 4 files | -57/+253
|/
* Repeat-penalty in the falcon example. (#634) | Laurent Mazare | 2023-08-28 | 1 file | -1/+33
* Remove some dead-code annotations. (#629) | Laurent Mazare | 2023-08-27 | 8 files | -62/+5
* Bump the crate version + update CHANGELOG. (#628) | Laurent Mazare | 2023-08-27 | 1 file | -5/+5
* VarBuilder cleanup (#627) | Laurent Mazare | 2023-08-27 | 4 files | -25/+24
* Add some optional repeat penalty. (#623) | Laurent Mazare | 2023-08-27 | 2 files | -17/+23
* Merge pull request #600 from huggingface/codellama_gpu_support | Nicolas Patry | 2023-08-25 | 4 files | -49/+93
|\
| * s/panic/bail/ | Nicolas Patry | 2023-08-25 | 2 files | -4/+4
| * Adding support for codellama in examples. | Nicolas Patry | 2023-08-25 | 4 files | -47/+91
* | Cleanup the pose reporting code. (#605) | Laurent Mazare | 2023-08-25 | 1 file | -65/+58
* | Add some configurable legend for yolo detection. (#603) | Laurent Mazare | 2023-08-25 | 3 files | -1/+44
* | Move the yolo model bits in a separate file. (#602) | Laurent Mazare | 2023-08-25 | 4 files | -749/+805
* | More support for pose estimation in yolo-v8. (#599) | Laurent Mazare | 2023-08-25 | 3 files | -16/+164
|/
* Generic implementation of vecdot for q80. (#596) | Laurent Mazare | 2023-08-25 | 1 file | -5/+23
* Add the pose estimation head for yolo. (#589) | Laurent Mazare | 2023-08-24 | 1 file | -6/+104
* Use the hub weights for efficientnet. (#573) | Laurent Mazare | 2023-08-23 | 1 file | -2/+12
* Add Efficientnet (#572) | Laurent Mazare | 2023-08-23 | 1 file | -0/+419
* Move the imagenet specific bits to a separate file. (#571) | Laurent Mazare | 2023-08-23 | 3 files | -1024/+1026
* Trace softmax (#568) | Laurent Mazare | 2023-08-23 | 1 file | -3/+8
* Add some group parameter to convolutions. (#566) | Laurent Mazare | 2023-08-23 | 9 files | -13/+37
* Get the rms epsilon from GGUF. (#565) | Laurent Mazare | 2023-08-23 | 1 file | -8/+10
* Fix the quantized example. (#564) | Laurent Mazare | 2023-08-23 | 1 file | -2/+2
* add chat models in quantized example (#551) | cksac | 2023-08-23 | 1 file | -0/+18
* GGUF support in the quantized model. (#559) | Laurent Mazare | 2023-08-23 | 1 file | -45/+143
* GQA support in the quantized model. (#555) | Laurent Mazare | 2023-08-22 | 2 files | -6/+32
* Put the transcribe token before the language one. (#553) | Laurent Mazare | 2023-08-22 | 1 file | -3/+3
* Improve the aspect ratio handling on yolo-v8. (#549) | Laurent Mazare | 2023-08-22 | 1 file | -14/+35
* Move the yolo shared bits to a common place. (#548) | Laurent Mazare | 2023-08-22 | 6 files | -181/+113
* Sketch the yolo wasm example. (#546) | Laurent Mazare | 2023-08-22 | 1 file | -4/+0
* Add some llama-v2 variants. (#545) | Laurent Mazare | 2023-08-22 | 1 file | -3/+22