path: root/candle-transformers/src/models/whisper
Commit message | Author | Date | Files | Lines
* Fix bug in whisper transformer (#2681) [mert-kurttutan, 2024-12-24, 1 file, +1/-0]
  - Fix bug in whisper transformer, due to num_threads going to zero in the single-threaded case
  - Apply rustfmt
  Co-authored-by: Laurent <laurent.mazare@gmail.com>
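The num_threads fix above comes down to clamping a computed thread count so it can never reach zero. A minimal sketch, assuming a hypothetical `effective_threads` helper (the real patch touches candle's spectrogram code, whose exact names may differ):

```rust
/// Hedged sketch of the fix in #2681: on a single-threaded machine,
/// the heuristic that scales the worker count down could produce 0,
/// which then broke the downstream chunking. Clamping with `.max(1)`
/// guarantees at least one worker. `effective_threads` is an
/// illustrative name, not the actual candle function.
fn effective_threads(requested: usize) -> usize {
    requested.max(1)
}
```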
* Module Docs (#2624) [zachcp, 2024-11-18, 1 file, +7/-3]
  - update whisper
  - update llama2c
  - update t5
  - update phi and t5
  - add a blip model
  - qlamma doc
  - add two new docs
  - add docs and emoji
  - additional models
  - openclip
  - pixtral
  - edits on the model docs
  - update yu
  - update a few more models
  - add persimmon
  - add model-level doc
  - names
  - update module doc
  - links in heira
  - remove empty URL
  - update more hyperlinks
  - updated hyperlinks
  - more links
  - Update mod.rs
  Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
* Documentation Pass for Models (#2617) [zachcp, 2024-11-15, 1 file, +8/-0]
  - links in chinese_clip
  - links for clip model
  - add mod docs for flux and llava
  - module doc for MMDIT and MIMI
  - add docs for a few more models
  - mod docs for bert naser and beit
  - add module docs for convmixer colpali codegeex and chatglm
  - add another series of moddocs
  - add fastvit-llama2_c
  - module docs mamba -> mobileone
  - module docs from moondream-phi3
  - mod docs for quantized and qwen
  - update to yi
  - fix long names
  - Update llama2_c.rs
  - Update llama2_c_weights.rs
  - Fix the link for mimi + tweaks
  Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
* Clippy fixes for 1.81.0. (#2461) [Laurent Mazare, 2024-09-05, 1 file, +2/-2]
  - Clippy fixes for 1.81.0.
  - Another fix.
* Speaker embeddings computation for metavoice. (#1800) [Laurent Mazare, 2024-03-04, 1 file, +1/-1]
  - Speaker embeddings computation for metavoice.
  - Compute the speaker embeddings.
* feat: support microphone whisper streaming (#1678) [drbh, 2024-02-12, 2 files, +53/-0]
  - feat: support microphone whisper streaming
  - fix: clean up print statements and adjust how input is read
  - fix: remove incorrect comment
  - feat: split into new example and simplify
  - fix: feature-flag the example file
  - fix: fmt fixes
  - feat: simplify and remove redundant files
* feat: support multithread spectrogram and small perf tweaks (#1674) [drbh, 2024-02-08, 3 files, +150/-28]
  - feat: support multithread spectrogram and small perf tweaks
  - feat: clippy improvement for loop variable
  - fix: add back the speed-up/scale-down logic
  - fix: re-add mirroring logic
  - feat: prefer scoped threads and simplify/improve logic/traits
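The scoped-thread approach in the last bullet can be sketched in plain std Rust. `process_frames` below is a hypothetical stand-in: the real commit computes mel filter-bank energies per frame, while this placeholder just sums each frame, but the partition-then-`std::thread::scope` pattern is the same (scoped threads let workers borrow the input slice without `Arc`):

```rust
use std::thread;

/// Split per-frame work across scoped threads. Each worker gets a
/// contiguous chunk of output frames and a shared borrow of `samples`.
/// The per-frame body is a placeholder sum standing in for the real
/// windowing + FFT + mel projection.
fn process_frames(samples: &[f32], frame_len: usize, n_threads: usize) -> Vec<f32> {
    let n_frames = samples.len() / frame_len;
    let mut out = vec![0.0f32; n_frames];
    // Frames per worker, rounded up; at least 1 so chunks_mut is valid.
    let chunk = ((n_frames + n_threads - 1) / n_threads.max(1)).max(1);
    thread::scope(|s| {
        for (t, out_chunk) in out.chunks_mut(chunk).enumerate() {
            let start = t * chunk;
            s.spawn(move || {
                for (i, o) in out_chunk.iter_mut().enumerate() {
                    let f = start + i;
                    // Placeholder for the per-frame spectrogram math.
                    *o = samples[f * frame_len..(f + 1) * frame_len].iter().sum();
                }
            });
        }
    });
    out
}
```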
* Use candle_nn::embedding instead of local copies in a few models. (#1562) [Jani Monoses, 2024-01-10, 1 file, +1/-6]
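An embedding layer is just a row lookup into a `(vocab_size, hidden)` weight matrix, which is why the hand-rolled copies in the models could be replaced by the shared `candle_nn::embedding` constructor. A tensor-free sketch of the operation (`embed` is an illustrative helper, not a candle API):

```rust
/// Gather rows of a row-major (vocab_size x hidden) weight matrix by
/// token id, producing a (token_ids.len() x hidden) flat output.
/// Illustrative only; the real layer operates on candle Tensors.
fn embed(weights: &[f32], hidden: usize, token_ids: &[usize]) -> Vec<f32> {
    let mut out = Vec::with_capacity(token_ids.len() * hidden);
    for &id in token_ids {
        out.extend_from_slice(&weights[id * hidden..(id + 1) * hidden]);
    }
    out
}
```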
* Use the whisper-v3 tokenizer now that it has been added. (#1337) [Laurent Mazare, 2023-11-16, 1 file, +1/-1]
  - Use the whisper-v3 tokenizer now that it has been added.
  - Use the appropriate nospeech token.
* fix: address clippy 0.1.74 issues (#1336) [drbh, 2023-11-16, 1 file, +1/-2]
  - clippy::needless-borrows-for-generic-args
  - clippy::reserve-after-initialization
* Preliminary support for whisper v3. (#1294) [Laurent Mazare, 2023-11-08, 2 files, +7/-3]
  - Preliminary support for whisper v3.
  - Add the missing files.
* Consolidate the with-tracing usage. (#1234) [Laurent Mazare, 2023-11-01, 1 file, +1/-27]
* Make the whisper model cloneable (#1200) [Laurent Mazare, 2023-10-27, 2 files, +11/-1]
  - Add a quantized variant of llama2.c
  - Clippy fixes.
  - Make the whisper model cloneable.
* Move the common quantized-nn code to a shared module. (#1063) [Laurent Mazare, 2023-10-09, 1 file, +6/-42]
* Better control on the optional dequantization in QMatMul (#1049) [Laurent Mazare, 2023-10-07, 1 file, +5/-6]
  - Cosmetic change to the quantized whisper model.
  - Fix the dequantization.
  - Add the dequantize all variable.
* Add a quantized variant of whisper (#1017) [Laurent Mazare, 2023-10-02, 3 files, +424/-18]
  - Add the quantized-whisper model.
  - Quantize the whisper model.
  - Adapt the whisper example to handle quantization.
  - Add the quantized flag.
  - Load the proper weights.
* Use softmax-last-dim in whisper. (#810) [Laurent Mazare, 2023-09-11, 1 file, +2/-2]
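The softmax-last-dim op switched to above computes a numerically stable softmax independently over each row of the trailing dimension. An illustrative scalar version for a row-major matrix, using the usual max-subtraction trick (not candle's actual fused kernel):

```rust
/// In-place softmax over the last dimension of a row-major
/// (rows x cols) matrix. Subtracting each row's max before exp()
/// avoids overflow without changing the result.
fn softmax_last_dim(data: &mut [f32], cols: usize) {
    for row in data.chunks_mut(cols) {
        let max = row.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
        let mut sum = 0.0f32;
        for x in row.iter_mut() {
            *x = (*x - max).exp();
            sum += *x;
        }
        for x in row.iter_mut() {
            *x /= sum;
        }
    }
}
```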
* Move some models to candle-transformers so that it's easier to re-use. (#794) [Laurent Mazare, 2023-09-10, 3 files, +652/-0]
  - Move some models to candle-transformers so that they can be shared.
  - Also move falcon.
  - Move Llama.
  - Move whisper (partial).