path: root/candle-nn/tests
Commit message                                                               | Author          | Date       | Files | Lines
-----------------------------------------------------------------------------|-----------------|------------|-------|--------
Add one-hot/cold encoding (#1489)                                            | Ryan Tate       | 2024-01-01 | 1     | -0/+120
Do not implement Module for BatchNorm. (#1513)                               | Laurent Mazare  | 2024-01-01 | 1     | -2/+2
[Breaking] Add training to batchnorm with exponential moving average (#1504) | nkoppel         | 2023-12-30 | 1     | -0/+11
Add Binary Cross Entropy With Logit Loss to nn crate (#1157)                 | Ogundepo Odunayo| 2023-10-23 | 1     | -0/+47
Add a custom softmax implementation. (#744)                                  | Laurent Mazare  | 2023-09-05 | 1     | -0/+10
Avoid some redundant clone. (#731)                                           | Laurent Mazare  | 2023-09-04 | 1     | -2/+2
Add the optimizer trait. (#702)                                              | Laurent Mazare  | 2023-09-01 | 1     | -3/+3
Add a GRU layer. (#688)                                                      | Laurent Mazare  | 2023-08-31 | 1     | -0/+44
Add a python variant for the lstm test. (#682)                               | Laurent Mazare  | 2023-08-30 | 1     | -0/+15
Add a LSTM test. (#681)                                                      | Laurent Mazare  | 2023-08-30 | 1     | -0/+42
Move the test-utils bits to a shared place. (#619)                           | Laurent Mazare  | 2023-08-27 | 7     | -62/+13
Some fixes for yolo-v3. (#529)                                               | Laurent Mazare  | 2023-08-20 | 1     | -5/+9
Add a yolo-v3 example. (#528)                                                | Laurent Mazare  | 2023-08-20 | 6     | -6/+29
Add a batch normalization layer (#508)                                       | Laurent Mazare  | 2023-08-18 | 1     | -0/+70
Add a simple Module trait and implement it for the various nn layers (#500)  | Laurent Mazare  | 2023-08-18 | 3     | -3/+3
Fix the tests for mkl. (#437)                                                | Laurent Mazare  | 2023-08-14 | 1     | -2/+4
Fixes for the stable diffusion example. (#342)                               | Laurent Mazare  | 2023-08-08 | 1     | -2/+2
Implement group-norm. (#334)                                                 | Laurent Mazare  | 2023-08-07 | 1     | -0/+103
Add the AdamW optimizer. (#307)                                              | Laurent Mazare  | 2023-08-02 | 3     | -15/+100
Llama more training (#297)                                                   | Laurent Mazare  | 2023-08-01 | 1     | -2/+2
Add the cross-entropy loss. (#287)                                           | Laurent Mazare  | 2023-07-31 | 1     | -1/+4
Make the nll op closer to the pytorch version + add a test. (#286)           | Laurent Mazare  | 2023-07-31 | 1     | -0/+31
Softmax numerical stability. (#267)                                          | Laurent Mazare  | 2023-07-28 | 1     | -0/+62
Simplify the parameters used by sum and sum_keepdim. (#165)                  | Laurent Mazare  | 2023-07-14 | 1     | -2/+2
Use the same default as pytorch for sum. (#164)                              | Laurent Mazare  | 2023-07-13 | 1     | -2/+2
Add the pytorch version of the linear regression as a comment. (#163)        | Laurent Mazare  | 2023-07-13 | 1     | -0/+24
Add the gradient for reduce-sum. (#162)                                      | Laurent Mazare  | 2023-07-13 | 1     | -2/+26
Add the SGD optimizer (#160)                                                 | Laurent Mazare  | 2023-07-13 | 1     | -0/+19
Add some layer-norm tests. (#121)                                            | Laurent Mazare  | 2023-07-10 | 1     | -0/+43