path: root/candle-kernels/src
| Commit message | Author | Age | Files | Lines (-/+) |
|----------------|--------|-----|-------|-------------|
| Import the ggml_cuda_dp4a function. (#2628) | Laurent Mazare | 2024-11-19 | 1 | -33/+44 |
| Improved launch config for layer-norm/rms-norm. (#2591) | Laurent Mazare | 2024-11-04 | 1 | -8/+6 |
| Add the layernorm specialized op. (#2212) | Laurent Mazare | 2024-05-24 | 1 | -0/+84 |
| More efficient cuda implementation for ConvTranspose1d. (#2211) | Laurent Mazare | 2024-05-24 | 1 | -0/+65 |
| Fix sigmoid gradient calculation and move sigmoid into a specialized op (#2114) | MilkFather | 2024-04-29 | 1 | -0/+9 |
| Add the cuda dequantize f16 kernels. (#2137) | Laurent Mazare | 2024-04-28 | 1 | -37/+75 |
| Add argsort. (#2132) | Laurent Mazare | 2024-04-27 | 2 | -0/+89 |
| Add more QMMV cuda kernels. (#2077) | Laurent Mazare | 2024-04-18 | 1 | -0/+324 |
| Add the mmv kernels for small batch sizes. (#2075) | Laurent Mazare | 2024-04-16 | 1 | -10/+254 |
| Faster kernels for quantized matmul on cuda (#2060) | Laurent Mazare | 2024-04-15 | 1 | -11/+118 |
| Add the full quantized matmul kernels for cuda. (#2057) | Laurent Mazare | 2024-04-14 | 1 | -0/+1071 |
| Add the rope THD kernel. (#2014) | Laurent Mazare | 2024-04-05 | 1 | -5/+43 |
| Add support for "sign" on tensors (#2012) | Thomas Santerre | 2024-04-04 | 1 | -0/+9 |
| Relax the contiguous check for cuda kernels. (#2000) | Laurent Mazare | 2024-04-03 | 1 | -1/+1 |
| More ggml cuda kernels (#1977) | Laurent Mazare | 2024-04-01 | 1 | -75/+1014 |
| Use the new rope kernel in mistral. (#1937) | Laurent Mazare | 2024-03-25 | 1 | -2/+2 |
| Contiguous variant of the rope kernel. (#1929) | Laurent Mazare | 2024-03-25 | 1 | -6/+34 |
| Fast kernels for rotary embeddings. (#1928) | Laurent Mazare | 2024-03-24 | 1 | -0/+29 |
| Add cast_bf16_x/cast_x_bf16 when CUDA_ARCH<800 but CUDA_VERSION >= 11000 (#1919) | yinqiwen | 2024-03-23 | 1 | -0/+12 |
| Support scatter/index_add with i64 indices for f16 (#1915) | Daniël de Kok | 2024-03-22 | 1 | -0/+2 |
| Custom op for RmsNorm (#1890) | Laurent Mazare | 2024-03-21 | 1 | -0/+65 |
| Cuda backend optimization (#1886) | Laurent Mazare | 2024-03-20 | 4 | -7/+7 |
| Optimize the cat operation on contiguous tensors (#1855) | Laurent Mazare | 2024-03-17 | 1 | -1/+29 |
| Add a cuda kernel for dequantizing q8_0. (#1804) | Laurent Mazare | 2024-03-05 | 1 | -0/+24 |
| Handle Q5_0 and Q5_1 quants in cuda. | laurent | 2024-02-29 | 1 | -7/+9 |
| Cuda kernel for dequantizing q8k. (#1760) | Laurent Mazare | 2024-02-26 | 1 | -0/+35 |
| Cuda acceleration for quantized model. (#1754) | Laurent Mazare | 2024-02-25 | 2 | -0/+1537 |
| Fix the silu cuda kernel. (#1710) | Laurent Mazare | 2024-02-14 | 1 | -1/+1 |
| feat: add silu activation function (#1706) | OlivierDehaene | 2024-02-14 | 1 | -0/+9 |
| ConvTranspose1d cuda support. (#1697) | Laurent Mazare | 2024-02-12 | 1 | -2/+77 |
| Rework the cuda casting bits. (#1112) | Laurent Mazare | 2023-10-17 | 1 | -31/+54 |
| fix: fix index_select cuda kernel for src target dim different than ids dim w... | Gonzalo | 2023-10-05 | 1 | -6/+8 |
| Add the rounding operators. (#1030) | Laurent Mazare | 2023-10-04 | 2 | -0/+24 |
| fix: add missing gpu fill_* (#996) | Gonzalo | 2023-09-29 | 1 | -0/+9 |
| Optimize the index-select cuda kernel. (#976) | Laurent Mazare | 2023-09-28 | 1 | -14/+8 |
| Add the missing kernel. (#955) | Laurent Mazare | 2023-09-24 | 1 | -0/+1 |
| cuda cast i64 (#925) | Gonzalo | 2023-09-21 | 1 | -0/+10 |
| Add an erf based gelu op (#900) | Laurent Mazare | 2023-09-19 | 2 | -0/+25 |
| im2col version of the conv1d kernel. (#815) | Laurent Mazare | 2023-09-11 | 1 | -1/+70 |
| im2col based conv2d (#802) | Laurent Mazare | 2023-09-10 | 1 | -0/+89 |
| Add a dedicated cuda kernel for softmax. (#746) | Laurent Mazare | 2023-09-05 | 1 | -0/+55 |
| Add tanh. (#675) | Laurent Mazare | 2023-08-30 | 1 | -0/+4 |
| Support dilation in conv-transpose2d. (#671) | Laurent Mazare | 2023-08-30 | 1 | -3/+3 |
| Add the powf op. (#664) | Laurent Mazare | 2023-08-29 | 1 | -0/+4 |
| Fix the dilated convolutions. (#659) | Laurent Mazare | 2023-08-29 | 1 | -2/+2 |
| Dilated convolutions (#657) | Laurent Mazare | 2023-08-29 | 1 | -6/+12 |
| Cuda conv transpose (#645) | Laurent Mazare | 2023-08-28 | 1 | -0/+88 |
| Let's keep the dirty code on its own. | Nicolas Patry | 2023-08-25 | 1 | -2/+25 |
| Intermediary float cast is necessary for cuda 11.8 | Nicolas Patry | 2023-08-25 | 1 | -2/+2 |
| `static_cast` ? | Nicolas Patry | 2023-08-25 | 1 | -2/+2 |
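
Several of the quantized matmul commits above (#2628, #2077, #2057) build on the dp4a instruction, which multiplies four packed int8 pairs and accumulates into an int32 in a single operation. The sketch below is a minimal illustration of that idea, not candle's actual `ggml_cuda_dp4a` wrapper; the `dot_i8` kernel and its layout are hypothetical. It requires compute capability 6.1+ (e.g. `nvcc -arch=sm_61`).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel, not candle's API: dot product of two int8 vectors
// packed four-per-int32. __dp4a multiplies the four byte pairs and adds
// the result to the int32 accumulator.
__global__ void dot_i8(const int* a, const int* b, int* out, int n_packed) {
    int acc = 0;
    for (int i = threadIdx.x; i < n_packed; i += blockDim.x)
        acc = __dp4a(a[i], b[i], acc);
    atomicAdd(out, acc);  // combine per-thread partial sums
}

int main() {
    const int n = 8;  // 8 packed ints = 32 int8 values
    int ha[n], hb[n];
    for (int i = 0; i < n; ++i) { ha[i] = 0x01010101; hb[i] = 0x02020202; }
    int *da, *db, *dout;
    cudaMalloc(&da, n * sizeof(int));
    cudaMalloc(&db, n * sizeof(int));
    cudaMalloc(&dout, sizeof(int));
    cudaMemcpy(da, ha, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(dout, 0, sizeof(int));
    dot_i8<<<1, 32>>>(da, db, dout, n);
    int result;
    cudaMemcpy(&result, dout, sizeof(int), cudaMemcpyDeviceToHost);
    printf("dot = %d\n", result);  // 32 elements of 1*2 -> 64
    return 0;
}
```

Packing four int8 values per 32-bit word is what makes the block-quantized formats handled above (q8_0, q8k, Q5_0/Q5_1) amenable to this instruction.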
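
Similarly, the conv1d/conv2d commits (#815, #802) use an im2col formulation: the input is unrolled so that each output position's receptive field becomes one contiguous row, reducing convolution to a matrix multiplication that can use the GPU's optimized GEMM paths. A minimal sketch for single-channel 1-D input; the kernel name and layout are assumptions, and the real kernels also handle channels, batching, padding and dilation.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical im2col kernel: dst is an (l_out x k) row-major matrix whose
// row `col` holds the k input taps feeding output position `col`, so
// conv1d(src, w) reduces to the matrix-vector product dst * w.
__global__ void im2col_1d(const float* src, float* dst,
                          int k, int stride, int l_out) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= l_out * k) return;
    int col = idx / k;  // which output position
    int tap = idx % k;  // which element of the receptive field
    dst[idx] = src[col * stride + tap];
}

int main() {
    const int l_in = 6, k = 3, stride = 1;
    const int l_out = (l_in - k) / stride + 1;  // 4 output positions
    float hsrc[l_in] = {0, 1, 2, 3, 4, 5};
    float *dsrc, *ddst;
    cudaMalloc(&dsrc, l_in * sizeof(float));
    cudaMalloc(&ddst, l_out * k * sizeof(float));
    cudaMemcpy(dsrc, hsrc, l_in * sizeof(float), cudaMemcpyHostToDevice);
    im2col_1d<<<1, 32>>>(dsrc, ddst, k, stride, l_out);
    float hdst[l_out * k];
    cudaMemcpy(hdst, ddst, sizeof(hdst), cudaMemcpyDeviceToHost);
    for (int r = 0; r < l_out; ++r)  // each row is one receptive field
        printf("%g %g %g\n", hdst[r * k], hdst[r * k + 1], hdst[r * k + 2]);
    return 0;
}
```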