path: root/candle-nn/examples/cpu_benchmarks.rs
Commit message | Author | Age | Files, Lines
* Optimize the cat operation on contiguous tensors (#1855) | Laurent Mazare | 2024-03-17 | 1 file, +19/-0
  - Add a specialized kernel for copy2d.
  - Move the cat operations.
  - Avoid transpositions in cat.
  - Bugfix.
  - Bugfix for the cuda kernel.
  - Add a benchmark.
  - Add more testing.
  - Test fix.
  - Faster kernel.
  - Add the missing kernel.
  - Tweak the test.
  - Add a metal kernel.
  - Fix for the metal kernel.
  - Get the tests to pass on metal.
  - Also use this opportunity to fix the metal kernel for ELU.
  - Add some bf16 kernels.
  - Clippy fixes.
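The commit above avoids transpositions in `cat` by introducing a copy2d kernel. As a rough, dependency-free illustration of the idea (the function name and signature here are illustrative, not candle's actual API), a CPU-side strided 2D copy lets each input of a dim-1 concatenation be written into its column slice of the output with one contiguous memcpy per row:

```rust
/// Copy `d1` rows of `d2` contiguous elements each from `src` to `dst`,
/// where consecutive rows are `src_s` (resp. `dst_s`) elements apart.
fn copy2d<T: Copy>(dst: &mut [T], src: &[T], d1: usize, d2: usize, dst_s: usize, src_s: usize) {
    for i in 0..d1 {
        let s = &src[i * src_s..i * src_s + d2];
        let d = &mut dst[i * dst_s..i * dst_s + d2];
        d.copy_from_slice(s); // contiguous memcpy per row
    }
}

fn main() {
    // Concatenate two 2x2 row-major matrices along dim 1 into a 2x4 matrix,
    // with no transposition: each input is copied as a strided column block.
    let a = [1.0f32, 2.0, 3.0, 4.0];
    let b = [5.0f32, 6.0, 7.0, 8.0];
    let mut out = [0.0f32; 8];
    copy2d(&mut out, &a, 2, 2, 4, 2); // left half
    copy2d(&mut out[2..], &b, 2, 2, 4, 2); // right half
    println!("{:?}", out); // [1.0, 2.0, 5.0, 6.0, 3.0, 4.0, 7.0, 8.0]
}
```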
* Quantized GGUF style (#1523) | Nicolas Patry | 2024-01-17 | 1 file, +4/-1
  - Metal quantized modifications proposal:
    - Add a device param wherever needed.
    - Create a new QMetal storage type that implements QuantizedType.
    - Update everywhere needed: fix Python, the examples, fmt, clippy, and stubs.
    - Add dequantize kernels; fix matmul; further clippy fixes to reach a working state.
  - Bug status uncovered along the way:
    - Q2K Metal: bugged (also present in GGML); fixed here.
    - Q4K CPU: bugged (present previously; a new test catches it).
    - Q5K CPU: bugged (present previously).
    - Q8_1 both: never really implemented, it seems.
    - Q8K Metal: never implemented in Metal.
  - Cleanup.
  - Fix the rebase.
  - Removing the fences speeds everything up and *is* correct this time.
  - Cleanup the fence.
  - After rebase.
  - Bad code removal.
  - Rebase after phi2 merge + fix replit default to CPU.
  - Making the CI happy.
  - More happy tests.
  Co-authored-by: Nicolas Patry <nicolas@Nicolass-MacBook-Pro.local>
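For context on the quantized formats this commit touches (Q2K, Q4K, Q8_0, ...): they are block-quantized layouts. The following is a simplified sketch in the spirit of GGML's Q8_0 format, under the assumption of 32 int8 values per block with one f32 scale; it is not candle's or GGML's actual code:

```rust
/// Simplified block quantization: 32 int8 values plus one f32 scale per block.
const BLOCK: usize = 32;

struct BlockQ8 {
    scale: f32,
    qs: [i8; BLOCK],
}

fn quantize(xs: &[f32; BLOCK]) -> BlockQ8 {
    // Scale so the largest magnitude maps to 127.
    let amax = xs.iter().fold(0.0f32, |a, &x| a.max(x.abs()));
    let scale = amax / 127.0;
    let inv = if scale > 0.0 { 1.0 / scale } else { 0.0 };
    let mut qs = [0i8; BLOCK];
    for (q, &x) in qs.iter_mut().zip(xs) {
        *q = (x * inv).round() as i8;
    }
    BlockQ8 { scale, qs }
}

fn dequantize(b: &BlockQ8) -> [f32; BLOCK] {
    let mut out = [0.0f32; BLOCK];
    for (o, &q) in out.iter_mut().zip(&b.qs) {
        *o = q as f32 * b.scale;
    }
    out
}

fn main() {
    let xs = [0.5f32; BLOCK];
    let deq = dequantize(&quantize(&xs));
    println!("round-trip error: {}", (deq[0] - 0.5).abs());
}
```

The real GGML "K" formats (Q2K, Q4K, ...) use larger super-blocks with per-sub-block scales and minimums, but the quantize/dequantize round-trip above is the basic pattern the kernels in this commit implement per backend.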
* Implement the module trait directly for QMatMul. (#1372) | Laurent Mazare | 2023-11-25 | 1 file, +1/-1
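The pattern in this commit is to implement the framework's `Module` trait (a single `forward` method) directly on the operator type, so a `QMatMul` can be used anywhere a module is expected. A toy, dependency-free sketch of that pattern follows; candle's real trait operates on `Tensor` and returns a `Result`, so the slice-based signature here is purely illustrative:

```rust
/// Illustrative stand-in for a framework Module trait.
trait Module {
    fn forward(&self, xs: &[f32]) -> Vec<f32>;
}

/// Toy QMatMul: in candle the weights would be quantized and dequantized
/// on the fly; here we keep plain f32 rows to keep the sketch runnable.
struct QMatMul {
    weight: Vec<Vec<f32>>,
}

impl Module for QMatMul {
    fn forward(&self, xs: &[f32]) -> Vec<f32> {
        self.weight
            .iter()
            .map(|row| row.iter().zip(xs).map(|(w, x)| w * x).sum())
            .collect()
    }
}

fn main() {
    let m = QMatMul { weight: vec![vec![1.0, 0.0], vec![0.0, 2.0]] };
    println!("{:?}", m.forward(&[3.0, 4.0])); // [3.0, 8.0]
}
```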
* Add a matvec cpu benchmark. (#1076) | Laurent Mazare | 2023-10-12 | 1 file, +22/-3
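The benchmark itself lives in `cpu_benchmarks.rs`; as an illustrative standalone sketch (not the file's actual code), a minimal naive matvec with a timing loop looks like:

```rust
use std::time::Instant;

/// Naive row-major matrix-vector product: y = m (rows x cols) * v.
fn matvec(m: &[f32], v: &[f32], rows: usize, cols: usize) -> Vec<f32> {
    (0..rows)
        .map(|r| m[r * cols..(r + 1) * cols].iter().zip(v).map(|(a, b)| a * b).sum())
        .collect()
}

fn main() {
    let (rows, cols) = (1024, 1024);
    let m = vec![1.0f32; rows * cols];
    let v = vec![2.0f32; cols];
    let start = Instant::now();
    let y = matvec(&m, &v, rows, cols);
    // Each output element sums 1024 products of 1.0 * 2.0.
    println!("matvec took {:?}, y[0] = {}", start.elapsed(), y[0]); // y[0] = 2048
}
```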
* Convmixer (#1073) | Laurent Mazare | 2023-10-11 | 1 file, +2/-2
  - Only optimize float tensors.
  - Use full tensors for zeros and ones.
  - Add a benchmark for the matmul slowness.
  - Add the convmixer model.
  - Proper adaptive pooling.
* Improve the quantized whisper setup. (#1018) | Laurent Mazare | 2023-10-02 | 1 file, +1/-1
  - Improve the quantized whisper setup.
  - Fix the config file paths.
  - Use the standard matmul where possible.
* Bugfix for the conv2d cpu kernel. (#820) | Laurent Mazare | 2023-09-11 | 1 file, +1/-1
* im2col based conv2d (#802) | Laurent Mazare | 2023-09-10 | 1 file, +69/-16
  - im2col implementation for conv2d.
  - Fix for the im2col implementation to match the current conv2d.
  - Small optimization.
  - Add a cuda kernel.
  - Handle arbitrary layouts.
  - Im2Col cuda code.
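The idea behind im2col, which this commit applies to conv2d, is to unfold every receptive-field patch into a row of a matrix so that the convolution becomes a single matmul. A hedged minimal sketch (single channel, stride 1, no padding; candle's implementation handles the general case and arbitrary layouts):

```rust
/// im2col for a single-channel HxW image with a kxk kernel, stride 1, no padding.
/// Each group of k*k output values is one flattened patch; a conv2d then
/// reduces to a matmul of the flattened kernel with the patch matrix.
fn im2col(img: &[f32], h: usize, w: usize, k: usize) -> Vec<f32> {
    let (oh, ow) = (h - k + 1, w - k + 1);
    let mut out = Vec::with_capacity(oh * ow * k * k);
    for y in 0..oh {
        for x in 0..ow {
            for dy in 0..k {
                for dx in 0..k {
                    out.push(img[(y + dy) * w + (x + dx)]);
                }
            }
        }
    }
    out
}

fn main() {
    // 3x3 image, 2x2 kernel of all ones: the conv output is each patch's sum.
    let img = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0];
    let cols = im2col(&img, 3, 3, 2);
    let conv: Vec<f32> = cols.chunks(4).map(|p| p.iter().sum::<f32>()).collect();
    println!("{:?}", conv); // [12.0, 16.0, 24.0, 28.0]
}
```

The trade-off the benchmark in this file measures: im2col duplicates input data (each pixel appears in up to k*k patches) in exchange for routing the work through a highly optimized matmul.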
* Bugfix so that im2col produce the same results as conv2d. (#801) | Laurent Mazare | 2023-09-10 | 1 file, +5/-1
* Add an im2col based benchmark. (#800) | Laurent Mazare | 2023-09-10 | 1 file, +71/-2
  - Add an im2col based benchmark.
  - Reshape the final result.
* Add a custom softmax implementation. (#744) | Laurent Mazare | 2023-09-05 | 1 file, +176/-0
  - Add a custom softmax implementation.
  - Add softmaxlastdim to the benchmarks.
  - And add a test.
  - Support more dtypes.
  - Polish the code.
  - Use the slow implementation on cuda.
  - Add a todo for the cuda kernel.
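The softmax-last-dim operation this entry benchmarks can be sketched as follows; this is a minimal dependency-free version (f32 only, in place), not candle's multi-dtype implementation:

```rust
/// Numerically stable softmax over the last dimension of a (rows x cols)
/// row-major buffer, mutating it in place.
fn softmax_last_dim(data: &mut [f32], cols: usize) {
    for row in data.chunks_mut(cols) {
        // Subtracting the row max first keeps exp() from overflowing.
        let max = row.iter().fold(f32::NEG_INFINITY, |a, &b| a.max(b));
        let mut sum = 0.0;
        for v in row.iter_mut() {
            *v = (*v - max).exp();
            sum += *v;
        }
        for v in row.iter_mut() {
            *v /= sum;
        }
    }
}

fn main() {
    let mut logits = [1.0f32, 2.0, 3.0, 1.0, 2.0, 3.0];
    softmax_last_dim(&mut logits, 3);
    println!("{:?}", logits); // each row of 3 now sums to 1.0
}
```

A custom fused kernel like this pays off because the naive tensor-op formulation (max, broadcast subtract, exp, sum, broadcast divide) makes several passes over memory, while the fused loop makes essentially two.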