path: root/candle-core/src/quantized/mod.rs
Each entry below lists the commit message (with PR number), author, date, and diffstat (files changed, -removed/+added lines).
* Add a Context trait similar to anyhow::Context. (#2676) (Laurent Mazare, 2024-12-22, 1 file, -2/+2)
  - Switch two unwrap to context.
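The unwrap-to-context change suggests an anyhow-style extension trait. A minimal sketch of that pattern, using a plain `String` error type as a stand-in rather than candle's actual error and trait definitions:

```rust
use std::fmt;

// A Context-style trait in the spirit of anyhow::Context: attach a message
// to a failing Result or a missing Option. Names and the String error type
// here are illustrative only.
pub trait Context<T> {
    fn context(self, msg: &'static str) -> Result<T, String>;
}

impl<T, E: fmt::Display> Context<T> for Result<T, E> {
    fn context(self, msg: &'static str) -> Result<T, String> {
        self.map_err(|e| format!("{msg}: {e}"))
    }
}

impl<T> Context<T> for Option<T> {
    fn context(self, msg: &'static str) -> Result<T, String> {
        self.ok_or_else(|| msg.to_string())
    }
}
```

With such a trait in scope, `map.get(name).unwrap()` can become `map.get(name).context("missing tensor")?`.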
* 20241118 docs (#2629) (zachcp, 2024-11-19, 1 file, -0/+1)
  - Module docs.
  - Varbuilder gguf docs.
  - Add a link to gguf files.
  - Small additional mod doc titles.
  - Safetensor docs.
  - More core docs.
  - More module docs in candle_core.
  - 2 more link fixes.
* Add a forward_via_f16 method to the qmatmul op. (#2138) (Laurent Mazare, 2024-04-28, 1 file, -0/+19)
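A hedged usage sketch of the two qmatmul forward paths; the semantics of `forward_via_f16` (dequantize the weights to f16, then run a regular matmul) are inferred from this commit and the related #2137 below, not from the implementation:

```rust
use candle_core::quantized::QMatMul;
use candle_core::{Module, Result, Tensor};

// Compare the direct quantized path with the dequantize-to-f16 path.
fn compare_paths(qm: &QMatMul, xs: &Tensor) -> Result<(Tensor, Tensor)> {
    let direct = qm.forward(xs)?;          // quantized-kernel matmul
    let via_f16 = qm.forward_via_f16(xs)?; // f16 dequantize + regular matmul
    Ok((direct, via_f16))
}
```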
* Add the cuda dequantize f16 kernels. (#2137) (Laurent Mazare, 2024-04-28, 1 file, -4/+43)
  - Expose the cuda kernels.
  - Add some testing + fix.
  - Test the other cases too.
  - A few more tests.
  - Add an environment variable to enable the dequantize f16 + matmul behavior.
* Fix dequantization. (#1823) (Laurent Mazare, 2024-03-08, 1 file, -1/+1)
* Cuda acceleration for quantized model. (#1754) (Laurent Mazare, 2024-02-25, 1 file, -4/+32)
  - Boilerplate for the quantized cuda support.
  - More basic cuda support.
  - More cuda quantization (quantize on cpu for now).
  - Add the dequantization bit.
  - Start adding some dedicated cuda kernels from llama.cpp.
  - Move the kernel code.
  - Start interfacing with the kernel.
  - Tweak the kernel launch params.
  - Bugfix for quantized metal.
  - Fix some clippy lints.
  - Tweak the launch parameters.
  - Tweak cuda basics to perform a quantized matmul.
  - Perform the dequantization on the cpu + use cublas for matmul.
  - Add the dequantization kernel.
  - Test the qmatmul.
  - More kernels.
  - Matmul-vec kernel.
  - Add a couple kernels.
  - More dequantization kernels.
* Qmetal tweaks (#1704) (Laurent Mazare, 2024-02-13, 1 file, -91/+12)
  - Add the dummy qmetal backend.
  - Fix the metal compilation.
* Fixing quantized llama demo on metal. (#1703) (Nicolas Patry, 2024-02-13, 1 file, -0/+12)
* Quantized GGUF style (#1523) (Nicolas Patry, 2024-01-17, 1 file, -48/+254)
  - Metal quantized modifications proposal:
    - Add a device param wherever needed.
    - Create a new QMetal storage type that implements QuantizedType.
    - Update everywhere needed; fix Python, the examples, fmt + clippy + stubs.
    - Add dequantize kernels and fix matmul; reach a working state.
    - Quantization status found along the way: Q2K metal bugged (also present in GGML), Q4K cpu bugged (present previously, caught by a new test), Q5K cpu bugged (present previously), Q8_1 never really implemented on either backend, Q8K never implemented in metal.
    - Fix the Q2K bug (present in ggml).
  - Cleanup.
  - Fix the rebase.
  - Removing the fences speeds everything up and *is* correct this time...
  - Cleanup the fence.
  - After rebase.
  - Bad code removal.
  - Rebase after phi2 merge + fix replit default to CPU.
  - Making the CI happy.
  - More happy tests.
  - Co-authored-by: Nicolas Patry <nicolas@Nicolass-MacBook-Pro.local>
* Implement the module trait directly for QMatMul. (#1372) (Laurent Mazare, 2023-11-25, 1 file, -2/+2)
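With `Module` implemented directly on `QMatMul`, a quantized projection can be passed anywhere a regular layer is expected. A small sketch:

```rust
use candle_core::quantized::QMatMul;
use candle_core::{Module, Result, Tensor};

// Any code written against the Module trait now accepts QMatMul too.
fn project(layer: &impl Module, xs: &Tensor) -> Result<Tensor> {
    layer.forward(xs)
}

fn run(qm: &QMatMul, xs: &Tensor) -> Result<Tensor> {
    project(qm, xs) // QMatMul used through the Module trait
}
```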
* Better control on the optional dequantization in QMatMul (#1049) (Laurent Mazare, 2023-10-07, 1 file, -7/+28)
  - Cosmetic change to the quantized whisper model.
  - Fix the dequantization.
  - Add the dequantize all variable.
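The "dequantize all" control hints at a two-mode design: either keep the weights quantized and dequantize inside the matmul, or dequantize once up front and use the standard matmul. A sketch of that shape (type and variant names are illustrative, not candle's actual enum):

```rust
use std::sync::Arc;
use candle_core::quantized::QTensor;
use candle_core::Tensor;

// Illustrative two-mode wrapper: quantized weights dequantized on the fly,
// or a fully dequantized tensor that trades memory for the regular matmul.
enum QMatMulSketch {
    Quantized(Arc<QTensor>),
    Dequantized(Tensor),
}
```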
* Improve the quantized whisper setup. (#1018) (Laurent Mazare, 2023-10-02, 1 file, -10/+19)
  - Fix the config file paths.
  - Use the standard matmul where possible.
* simd128 optimized q8_0 vecdot (#972) (Laurent Mazare, 2023-09-27, 1 file, -0/+2)
  - wasm/simd128 version of the quantized q8_0 vecdot.
  - Add the missing conversion.
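For reference, the scalar computation that the simd128 code vectorizes: in GGML's q8_0 format each block holds 32 signed 8-bit quants plus one scale, and the dot product accumulates `d_lhs * d_rhs * sum(q_lhs[i] * q_rhs[i])` per block. A simplified sketch (f32 scale instead of GGML's f16):

```rust
// Simplified q8_0 block: one scale plus 32 quantized values.
struct BlockQ8_0 {
    d: f32,
    qs: [i8; 32],
}

// Scalar reference for the vecdot that the wasm/simd128 kernel accelerates.
fn vec_dot_q8_0(lhs: &[BlockQ8_0], rhs: &[BlockQ8_0]) -> f32 {
    lhs.iter()
        .zip(rhs.iter())
        .map(|(x, y)| {
            let sum: i32 = x.qs.iter().zip(y.qs.iter())
                .map(|(&a, &b)| a as i32 * b as i32)
                .sum();
            x.d * y.d * sum as f32
        })
        .sum()
}
```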
* Add a quantized version of the t5 model. (#921) (Laurent Mazare, 2023-09-21, 1 file, -1/+1)
* Support for quantized tensors in the python api. (#706) (Laurent Mazare, 2023-09-01, 1 file, -3/+11)
  - Add more pyo3 support.
  - Add some support for quantized tensors in pyo3.
  - Add an arc layer on qmatmul.
  - Add the quantized matmul.
  - Quantization support.
  - More quantization support.
  - Test the python quantization.
* Llama quantization. (#625) (Laurent Mazare, 2023-08-27, 1 file, -0/+4)
* Add a function to write gguf files. (#585) (Laurent Mazare, 2023-08-24, 1 file, -1/+38)
  - More GGUF file writing.
  - Write the tensor data in GGUF files.
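A hypothetical call shape for the writer added here; the actual signature is not shown in the log, so the argument layout below (a writer, metadata key/value pairs, named tensors) is an assumption:

```rust
use candle_core::quantized::{gguf_file, QTensor};
use candle_core::Result;
use std::fs::File;

// Assumed shape of the GGUF writer: empty metadata, named quantized tensors.
fn save_gguf(path: &str, tensors: &[(&str, &QTensor)]) -> Result<()> {
    let mut file = File::create(path)?;
    gguf_file::write(&mut file, &[], tensors)?;
    Ok(())
}
```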
* Preliminary GGUF support. (#557) (Laurent Mazare, 2023-08-23, 1 file, -0/+1)
  - Tensor reading.
* Add quantization support for `q2k`, `q3k`, `q4k` and `q5k` (#524) (Lukas Kreussel, 2023-08-22, 1 file, -0/+1)
  - First q2 implementation.
  - First Q4K and Q5K implementations.
  - Fix `q2k` and `q5k`.
  - Some first cleanups.
  - Run `clippy` on tests.
  - Finally implement `q3k`.
  - Deactivate the `q3k` test on macos.
  - Also disable the test on linux.
  - Fix floating bits in `q3k` dequantization.
  - Refactoring pass + reorder quants in file.
  - `fmt`.
  - Re-add `src` asserts and redefine `dst`.
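For context on what these k-quants store: they operate on 256-element super-blocks split into sub-blocks, with per-sub-block scales that are themselves quantized. A sketch of the Q4_K block layout per the upstream GGML format (treat the exact Rust types as illustrative):

```rust
use half::f16;

// GGML-style Q4_K super-block: 256 weights, 8 sub-blocks of 32.
#[repr(C)]
struct BlockQ4K {
    d: f16,           // super-block scale applied to the 6-bit sub-scales
    dmin: f16,        // super-block scale applied to the 6-bit sub-mins
    scales: [u8; 12], // packed 6-bit scales and mins, one pair per sub-block
    qs: [u8; 128],    // 256 4-bit quants, two per byte
}
```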
* Neon support for quantization. (#519) (Laurent Mazare, 2023-08-19, 1 file, -0/+2)
  - Skeleton files for neon support of quantization.
  - SIMD version for q4 vecdot.
  - Also simdify the q6k multiplication.
* Add a simple Module trait and implement it for the various nn layers (#500) (Laurent Mazare, 2023-08-18, 1 file, -0/+1)
  - Start adding the module trait.
  - Use the module trait.
  - Implement module for qmatmul.
* Tensor -> QTensor conversion (#496) (Laurent Mazare, 2023-08-18, 1 file, -3/+40)
  - Sketch some qmatmul test.
  - Add the quantization function.
  - More testing.
  - Make the test smaller and faster.
  - Add some shape checking.
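A round-trip sketch in the spirit of the tests this commit describes; the exact signatures may differ across candle versions:

```rust
use candle_core::quantized::{GgmlDType, QTensor};
use candle_core::{Device, Result, Tensor};

fn quantize_roundtrip() -> Result<()> {
    let dev = Device::Cpu;
    // The last dimension must be a multiple of the quantization block size.
    let xs = Tensor::rand(-1.0f32, 1.0, (4, 256), &dev)?;
    let q = QTensor::quantize(&xs, GgmlDType::Q4_0)?; // lossy Tensor -> QTensor
    let ys = q.dequantize(&dev)?;                     // back to f32
    assert_eq!(ys.dims(), xs.dims()); // ys matches xs up to quantization error
    Ok(())
}
```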
* Relax the requirements on CustomOp. (#486) (Laurent Mazare, 2023-08-17, 1 file, -3/+3)
  - Simplify the custom-ops when no backward is required.
* Move the avx specific bits to a separate file. (#481) (Laurent Mazare, 2023-08-17, 1 file, -0/+2)
* Get the ggml based llama to generate some text. (#464) (Laurent Mazare, 2023-08-16, 1 file, -13/+18)
  - Add more stats to the ggml example.
  - Build a quantized model from the file content.
  - Move the tensor retrieval in the main crate.
  - Start adding the forward pass.
  - Add more to the forward pass of the quantized llama.
  - Apply the attention layers.
  - Add the sampling loop.
  - Get the sampling loop to work.
  - Minor tweak.
  - Add a quantize/dequantize test.
  - Bugfix.
  - Add a comment + swap the order.
  - Bugfixes.
* Add quantized tensors. (#458) (Laurent Mazare, 2023-08-15, 1 file, -1/+113)
  - Implement the debug trait for QTensor.
  - Add the QMatMul custom op.
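The Debug impl mentioned here presumably summarizes a tensor rather than dumping raw quantized blocks. An illustrative stand-in (struct and field names are hypothetical):

```rust
use std::fmt;

// Stand-in for QTensor's Debug: report shape and quantization type only.
struct QTensorSketch {
    shape: Vec<usize>,
    dtype: &'static str, // e.g. "q4_0"
}

impl fmt::Debug for QTensorSketch {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "QTensor[{:?}; {}]", self.shape, self.dtype)
    }
}
```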
* Split out the quantized file. (#456) (Laurent Mazare, 2023-08-15, 1 file, -0/+82)