path: root/candle-core/src/lib.rs
Commit message | Author | Age | Files | Lines
* Support for UG kernels. (#2579) | Laurent Mazare | 2024-10-27 | 1 | -1/+1
* Export TensorIndexer public to candle users (#2477) | Shengtuo Hu | 2024-09-13 | 1 | -1/+1
* Stream tensor (#2429) | Laurent Mazare | 2024-08-17 | 1 | -0/+2
* Add a toggle for F16/BF16 accumulation in gemm. (#2141) | Laurent Mazare | 2024-04-29 | 1 | -3/+5
* Add argsort. (#2132) | Laurent Mazare | 2024-04-27 | 1 | -0/+1
* Add StorageRef. (#2113) | Laurent Mazare | 2024-04-23 | 1 | -1/+1
* first commit (#2018) | Jorge António | 2024-04-05 | 1 | -1/+1
* modify access for conv and op to be pub to allow external packages to have cu... | Thomas Santerre | 2024-04-01 | 1 | -2/+2
* Backend refactoring. (#1966) | Laurent Mazare | 2024-03-29 | 1 | -2/+3
* fix minor typo (#1924) | Marco Inacio | 2024-03-29 | 1 | -1/+1
* Preliminary support for inplace ops. (#1921) | Laurent Mazare | 2024-03-23 | 1 | -1/+1
* Prepare for the custom-op extension. (#1892) | Laurent Mazare | 2024-03-21 | 1 | -1/+2
* Optimize the cat operation on contiguous tensors (#1855) | Laurent Mazare | 2024-03-17 | 1 | -0/+1
* Module implementation for options. (#1728) | Laurent Mazare | 2024-02-18 | 1 | -0/+9
* Expose the ndarray trait. (#1586) | Laurent Mazare | 2024-01-14 | 1 | -1/+1
* Implement the module trait directly for QMatMul. (#1372) | Laurent Mazare | 2023-11-25 | 1 | -6/+0
* Refactor to simplify our lives for settings the params in the encoder. | Nicolas Patry | 2023-11-20 | 1 | -0/+2
* Metal part 1 - Scaffolding for metal. (#1308) | Nicolas Patry | 2023-11-10 | 1 | -0/+7
* Allow for different behavior between training and eval (#1213) | Laurent Mazare | 2023-10-29 | 1 | -0/+12
* Add the upblocks. (#853) | Laurent Mazare | 2023-09-14 | 1 | -1/+7
* Remove set_training. (#784) | Laurent Mazare | 2023-09-09 | 1 | -6/+0
* Get the comparison operation to work on scalar values. (#780) | Laurent Mazare | 2023-09-08 | 1 | -0/+1
* Simplify usage of the pool functions. (#662) | Laurent Mazare | 2023-08-29 | 1 | -0/+33
* Move the test-utils bits to a shared place. (#619) | Laurent Mazare | 2023-08-27 | 1 | -0/+1
* Neon intrinsics for the q8_0 vecdot. (#604) | Laurent Mazare | 2023-08-25 | 1 | -0/+3
* Preliminary support for importing PyTorch weights. (#511) | Laurent Mazare | 2023-08-19 | 1 | -0/+1
* Split out the quantized file. (#456) | Laurent Mazare | 2023-08-15 | 1 | -1/+1
* Simd support (#448) | Laurent Mazare | 2023-08-15 | 1 | -1/+1
* Cudnn support (#445) | Laurent Mazare | 2023-08-14 | 1 | -0/+2
* Conv1d optimize (#392) | Laurent Mazare | 2023-08-10 | 1 | -0/+1
* Support the Accelerate BLAS on macOS. (#325) | Laurent Mazare | 2023-08-05 | 1 | -0/+2
* Initial support for reading ggml files. (#311) | Laurent Mazare | 2023-08-02 | 1 | -0/+1
* Add the AdamW optimizer. (#307) | Laurent Mazare | 2023-08-02 | 1 | -1/+1
* Rename the candle crate to candle-core (#301) | Laurent Mazare | 2023-08-02 | 1 | -2/+2
* Add training for the llama2.c example (#296) | Laurent Mazare | 2023-08-01 | 1 | -1/+1
* Simplify Tensor::randn. (#255) | Laurent Mazare | 2023-07-27 | 1 | -1/+1
* Add flash attention (#241) | Laurent Mazare | 2023-07-26 | 1 | -1/+1
* Cleanup some todos. (#226) | Laurent Mazare | 2023-07-23 | 1 | -1/+1
* Custom ops with a single argument (#214) | Laurent Mazare | 2023-07-21 | 1 | -3/+4
* More realistic training setup. (#210) | Laurent Mazare | 2023-07-20 | 1 | -1/+1
* Broadcasting performance optimization (cpu) (#182) | Laurent Mazare | 2023-07-17 | 1 | -1/+1
* Iteration over strided blocks (#175) | Laurent Mazare | 2023-07-15 | 1 | -1/+1
* Add the SGD optimizer (#160) | Laurent Mazare | 2023-07-13 | 1 | -0/+1
* Move the variable creation to the variable module. (#159) | Laurent Mazare | 2023-07-13 | 1 | -1/+1
* Introduce the variables api used for adjusting parameters during the training... | Laurent Mazare | 2023-07-13 | 1 | -0/+2
* Add from_iter and arange, use it in the doctests. (#145) | Laurent Mazare | 2023-07-12 | 1 | -2/+2
* Allow for lazy loading of npz files, use it in llama to reduce memory usage i... | Laurent Mazare | 2023-07-11 | 1 | -1/+1
* Modular backends (#138) | Laurent Mazare | 2023-07-11 | 1 | -2/+3
* Merge pull request #127 from LaurentMazare/tensor_indexing | Nicolas Patry | 2023-07-10 | 1 | -0/+2
|\
| * `i(..)` indexing sugar (partial). | Nicolas Patry | 2023-07-10 | 1 | -0/+2