path: root/candle-core/src/quantized/ggml_file.rs
Commit message | Author | Date | Files | Lines (-/+)
* 20241118 docs (#2629) | zachcp | 2024-11-19 | 1 | -1/+1
  - module docs
  - varbuilder gguf docs
  - add a link to gguf files
  - small additional mod doc titles
  - safetensor docs
  - more core docs
  - more module docs in candle_core
  - 2 more link fixes
* Cuda acceleration for quantized model. (#1754) | Laurent Mazare | 2024-02-25 | 1 | -9/+2
  - Boilerplate for the quantized cuda support.
  - More basic cuda support.
  - More cuda quantization (quantize on cpu for now).
  - Add the dequantization bit.
  - Start adding some dedicated cuda kernels from llama.cpp.
  - Move the kernel code.
  - Start interfacing with the kernel.
  - Tweak the kernel launch params.
  - Bugfix for quantized metal.
  - Fix some clippy lints.
  - Tweak the launch parameters.
  - Tweak cuda basics to perform a quantized matmul.
  - Perform the dequantization on the cpu + use cublas for matmul (see the sketch below).
  - Add the dequantization kernel.
  - Test the qmatmul.
  - More kernels.
  - Matmul-vec kernel.
  - Add a couple kernels.
  - More dequantization kernels.
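The "dequantization on the cpu + cublas matmul" step is a fallback strategy: turn the quantized weights back into a dense f32 tensor and reuse the regular matmul path, which candle dispatches to cuBLAS on CUDA devices. A minimal sketch of that idea against the current public `QTensor` API, not the actual code from this commit:

```rust
use candle_core::{quantized::QTensor, Device, Result, Tensor};

// Fallback path: dequantize the weights to a dense tensor, then run a
// regular matmul (cuBLAS-backed on CUDA). The dedicated kernels added
// later in this PR avoid materializing the dense weights at all.
fn qmatmul_via_dequantize(x: &Tensor, w: &QTensor, device: &Device) -> Result<Tensor> {
    let w_dense = w.dequantize(device)?; // (out_dim, in_dim), f32
    x.matmul(&w_dense.t()?)              // (batch, in_dim) -> (batch, out_dim)
}
```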
* Fixing quantized llama demo on metal. (#1703) | Nicolas Patry | 2024-02-13 | 1 | -0/+3
* Quantized GGUF style (#1523) | Nicolas Patry | 2024-01-17 | 1 | -23/+61
  - Metal quantized modifications proposal:
    - Add a device param wherever needed (see the sketch below).
    - Create a new QMetal storage type that implements QuantizedType.
    - Update everywhere needed; fix Python; fix the examples; fmt + clippy + stub.
    - Move everything around; only the actual implems are missing.
    - Fix everything + add dequantized kernels; fix matmul; fmt + clippy.
    - Working state, with known bugs surfaced along the way:
      - Q2K Metal -> bugged (also present in GGML).
      - Q4K CPU -> bugged (present previously, a new test catches it).
      - Q5K CPU -> bugged (present previously).
      - Q8_1 both -> never really implemented, it seems.
      - Q8K Metal -> never implemented in Metal.
    - Fix the Q2K bug (present in ggml).
  - Cleanup.
  - Fix the rebase.
  - Removing the fences speeds everything up and *is* correct this time...
  - Cleanup the fence.
  - After rebase.
  - Bad code removal.
  - Rebase after the phi2 merge + fix replit defaulting to CPU.
  - Make the CI happy.
  - More happy tests.
  Co-authored-by: Nicolas Patry <nicolas@Nicolass-MacBook-Pro.local>
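The "device param" item survives in today's `ggml_file` API: reading a file takes a `Device` so the quantized storage can live on CPU, CUDA, or Metal. A hedged sketch of loading this file's `Content` on a chosen device, using current candle-core names (the exact signatures at the time of #1523 may have differed):

```rust
use candle_core::quantized::ggml_file::Content;
use candle_core::{Device, Result};

// Read a ggml file; the device decides where the quantized tensors are stored.
fn load_ggml(path: &str, device: &Device) -> Result<Content> {
    let mut file = std::fs::File::open(path)?;
    Content::read(&mut file, device)
}
```

`Content` then exposes the hyperparameters, vocabulary, and the named `QTensor`s read from the file.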
* Avoid some overflows on wasm32. (#968) | Laurent Mazare | 2023-09-26 | 1 | -1/+7
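The commit subject is the whole message here, so the concrete fix is not visible; the class of bug is that `usize` is 32 bits on wasm32, so byte-size arithmetic that is fine on 64-bit targets can wrap. A hypothetical illustration of the pattern, not the code from #968:

```rust
// On wasm32, usize is 32 bits, so nelem * bytes_per_elem can overflow even
// when both factors fit comfortably on a 64-bit target. Widening to u64
// keeps the intermediate exact; the final conversion checks that it fits.
fn tensor_byte_size(nelem: usize, bytes_per_elem: usize) -> Option<usize> {
    let bytes = (nelem as u64).checked_mul(bytes_per_elem as u64)?;
    usize::try_from(bytes).ok()
}
```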
* Tensor -> QTensor conversion (#496) | Laurent Mazare | 2023-08-18 | 1 | -1/+1
  - Sketch some qmatmul test.
  - Add the quantization function (see the sketch below).
  - More testing.
  - Make the test smaller and faster.
  - Add some shape checking.
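A sketch of the quantize/round-trip check this commit describes, written against the current `QTensor::quantize` signature; the original test in #496 likely differs in details:

```rust
use candle_core::quantized::{GgmlDType, QTensor};
use candle_core::{Device, Result, Tensor};

fn quantize_roundtrip() -> Result<()> {
    let dev = Device::Cpu;
    // Block-quantized formats need the last dim to be a multiple of the
    // block size (32 for Q4_0), hence the 4 x 256 shape.
    let src = Tensor::rand(-1.0f32, 1.0, (4, 256), &dev)?;
    let qt = QTensor::quantize(&src, GgmlDType::Q4_0)?;
    let back = qt.dequantize(&dev)?;
    // Quantization is lossy: check shapes and a small mean error rather
    // than expecting exact equality.
    assert_eq!(src.dims(), back.dims());
    let err = (&src - &back)?.abs()?.mean_all()?.to_scalar::<f32>()?;
    assert!(err < 0.1, "mean abs error too large: {err}");
    Ok(())
}
```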
* Get the ggml based llama to generate some text. (#464) | Laurent Mazare | 2023-08-16 | 1 | -4/+14
  - Add more stats to the ggml example.
  - Build a quantized model from the file content.
  - Move the tensor retrieval into the main crate.
  - Start adding the forward pass.
  - Add more to the forward pass of the quantized llama.
  - Apply the attention layers.
  - Add the sampling loop (see the sketch below).
  - Get the sampling loop to work.
  - Minor tweak.
  - Add a quantize/dequantize test.
  - Bugfix.
  - Add a comment + swap the order.
  - Bugfixes.
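The sampling-loop items are the token-by-token generation driver. A greedy-decoding sketch of that loop; `Model` is a hypothetical stand-in for the quantized llama's forward function, and the real example additionally handles temperature sampling and the KV-cache positions:

```rust
use candle_core::{Device, IndexOp, Result, Tensor, D};

// Hypothetical model interface, for illustration only.
trait Model {
    fn forward(&mut self, input: &Tensor, pos: usize) -> Result<Tensor>;
}

fn generate(model: &mut dyn Model, prompt: &[u32], steps: usize, dev: &Device) -> Result<Vec<u32>> {
    let mut tokens = prompt.to_vec();
    for i in 0..steps {
        // Feed the full prompt on the first step, then one token at a time.
        let ctx = if i == 0 { &tokens[..] } else { &tokens[tokens.len() - 1..] };
        let pos = tokens.len() - ctx.len();
        let input = Tensor::new(ctx, dev)?.unsqueeze(0)?;
        let logits = model.forward(&input, pos)?; // (1, seq, vocab)
        // Greedy pick: most likely next token at the last position.
        let last = logits.i((0, logits.dim(1)? - 1))?;
        let next = last.argmax(D::Minus1)?.to_scalar::<u32>()?;
        tokens.push(next);
    }
    Ok(tokens)
}
```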
* Add quantized tensors. (#458) | Laurent Mazare | 2023-08-15 | 1 | -105/+26
  - Add quantized tensors.
  - Implement the debug trait for QTensor.
  - Add the QMatMul custom op (see the sketch below).
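A sketch of the `QMatMul` op this commit introduces, as it looks in today's API (details of the original #458 version may differ). It multiplies by quantized weights, mirroring a linear layer with `(out_dim, in_dim)` weights:

```rust
use candle_core::quantized::{GgmlDType, QMatMul, QTensor};
use candle_core::{Device, Module, Result, Tensor};

fn qmatmul_demo() -> Result<Tensor> {
    let dev = Device::Cpu;
    // Quantize (out_dim, in_dim) weights; QMatMul applies the transpose,
    // so it behaves like a linear layer without a bias.
    let w = Tensor::rand(-1.0f32, 1.0, (64, 128), &dev)?;
    let qw = QTensor::quantize(&w, GgmlDType::Q8_0)?;
    let mm = QMatMul::from_qtensor(qw)?;
    let x = Tensor::rand(-1.0f32, 1.0, (2, 128), &dev)?;
    mm.forward(&x) // -> (2, 64)
}
```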
* Split out the quantized file. (#456) | Laurent Mazare | 2023-08-15 | 1 | -0/+294