| Commit message | Author | Age | Files | Lines |
| |
* Fix a bug in the whisper transformer caused by `num_threads` dropping to zero in the single-threaded case (a guard sketch follows below).
* Apply rustfmt.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
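A minimal sketch of the kind of guard involved, assuming the worker count is derived from `std::thread::available_parallelism` and then divided; the names here are illustrative, not the crate's actual ones:

```rust
use std::thread;

// Illustrative guard: derive a worker count that can never reach zero,
// even on a single-core host where an integer division truncates to 0.
fn worker_count() -> usize {
    let available = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    // e.g. "half the cores" truncates to 0 when `available == 1`;
    // clamping avoids a later division/chunking by zero.
    (available / 2).max(1)
}

fn main() {
    println!("using {} worker thread(s)", worker_count());
}
```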
|
| |
* update whisper
* update llama2c
* update t5
* update phi and t5
* add a blip model
* quantized llama doc
* add two new docs
* add docs and emoji
* additional models
* openclip
* pixtral
* edits on the model docs
* update yi
* update a few more models
* add persimmon
* add model-level doc
* names
* update module doc
* links in hiera
* remove empty URL
* update more hyperlinks
* updated hyperlinks
* more links
* Update mod.rs
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
|
| |
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of mod docs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
|
| |
* Clippy fixes for 1.81.0.
* Another fix.
|
| |
* Speaker embeddings computation for metavoice.
* Compute the speaker embeddings.
|
| |
* feat: support microphone whisper streaming
* fix: clean up print statements and adjust how input is read
* fix: remove incorrect comment
* feat: split into new example and simplify
* fix: feature flag example file
* fix: fmt fixes
* feat: simplify and remove redundant files
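For context, a stripped-down capture loop in the style such a microphone example typically uses (assuming the `cpal` crate with its 0.15-style API; buffering policy and the hand-off to the whisper decoder are elided):

```rust
use std::sync::{Arc, Mutex};

use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

fn main() {
    let device = cpal::default_host()
        .default_input_device()
        .expect("no input device available");
    let config = device
        .default_input_config()
        .expect("no default input config");

    // Shared buffer that a streaming transcriber could drain in chunks.
    let samples: Arc<Mutex<Vec<f32>>> = Arc::new(Mutex::new(Vec::new()));
    let sink = Arc::clone(&samples);

    let stream = device
        .build_input_stream(
            &config.into(),
            move |data: &[f32], _: &cpal::InputCallbackInfo| {
                sink.lock().unwrap().extend_from_slice(data);
            },
            |err| eprintln!("input stream error: {err}"),
            None,
        )
        .expect("failed to build input stream");
    stream.play().expect("failed to start capture");

    std::thread::sleep(std::time::Duration::from_secs(5));
    println!("captured {} samples", samples.lock().unwrap().len());
}
```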
|
| |
* feat: support multithread spectrogram and small perf tweaks
* feat: clippy improvement for loop variable
* fix: add back the speed-up / scale-down logic
* fix: re-add the mirroring logic
* feat: prefer scoped thread and simplify/improve logic/traits
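A sketch of the scoped-thread layout this describes, with the per-frame FFT/mel step stubbed out (`mel_frame` and the chunking scheme are illustrative, not the example's actual code):

```rust
use std::thread;

/// Fill an `n_frames * n_mel` mel buffer, splitting frames across threads.
fn compute_spectrogram(samples: &[f32], n_frames: usize, n_mel: usize) -> Vec<f32> {
    let n_threads = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let chunk = n_frames.div_ceil(n_threads).max(1);
    let mut mel = vec![0f32; n_frames * n_mel];

    // Scoped threads let each worker borrow `samples` and fill its own
    // disjoint slice of the output without Arc/Mutex bookkeeping.
    thread::scope(|s| {
        for (t, out) in mel.chunks_mut(chunk * n_mel).enumerate() {
            s.spawn(move || {
                for (i, row) in out.chunks_mut(n_mel).enumerate() {
                    mel_frame(samples, t * chunk + i, row);
                }
            });
        }
    });
    mel
}

// Placeholder for the per-frame work; the real code computes an FFT and
// applies the mel filterbank here.
fn mel_frame(_samples: &[f32], _frame: usize, row: &mut [f32]) {
    row.fill(0.0);
}

fn main() {
    let samples = vec![0f32; 16_000];
    let mel = compute_spectrogram(&samples, 100, 80);
    println!("mel buffer holds {} values", mel.len());
}
```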
|
| |
* Use the whisper-v3 tokenizer now that it has been added.
* Use the appropriate nospeech token.
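A sketch of looking the token id up from the tokenizer instead of hard-coding it (assumes the `tokenizers` crate; the exact token strings per whisper release are an assumption here, not taken from the commit):

```rust
use tokenizers::Tokenizer;

// Whisper releases have used different names for the no-speech token,
// so probe the tokenizer for whichever one is present.
fn no_speech_token(tokenizer: &Tokenizer) -> Option<u32> {
    ["<|nospeech|>", "<|nocaptions|>"]
        .into_iter()
        .find_map(|t| tokenizer.token_to_id(t))
}
```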
|
| |
- clippy::needless_borrows_for_generic_args
- clippy::reserve_after_initialization
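Toy illustrations of what these two lints flag (not code from the repo):

```rust
// clippy::reserve_after_initialization — construct with the capacity
// instead of calling `reserve` right after `Vec::new()`.
fn token_buffer(n: usize) -> Vec<u32> {
    // Before: let mut v = Vec::new(); v.reserve(n); v
    Vec::with_capacity(n)
}

// clippy::needless_borrows_for_generic_args — generic `AsRef`-style
// parameters don't need an extra `&` on the argument.
fn read_config() -> std::io::Result<String> {
    // Before: std::fs::read_to_string(&"config.json")
    std::fs::read_to_string("config.json")
}

fn main() {
    let _buf = token_buffer(8);
    // The path is illustrative; the call just exercises the function.
    let _ = read_config();
}
```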
|
| |
* Preliminary support for whisper v3.
* Add the missing files.
|
| |
* Add a quantized variant of llama2.c
* Clippy fixes.
* Make the whisper model cloneable.
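Making the model cloneable typically amounts to deriving `Clone` once every field supports it; a toy shape, not the actual whisper structs:

```rust
#[derive(Clone)]
struct Config {
    n_mels: usize,
}

#[derive(Clone)]
struct Model {
    config: Config,
    // In the real model these would be weight tensors; Vec<f32> stands in.
    weights: Vec<f32>,
}

fn main() {
    let model = Model { config: Config { n_mels: 80 }, weights: vec![0.0; 4] };
    // e.g. one independent copy per decoding task.
    let _copy = model.clone();
}
```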
|
| |
* Cosmetic change to the quantized whisper model.
* Fix the dequantization.
* Add the dequantize all variable.
|
| |
* Add the quantized-whisper model.
* Quantized the whisper model.
* Adapt the whisper example to handle quantization.
* Add the quantized flag.
* Load the proper weights.
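A sketch of how a `--quantized` flag can switch which weight files an example loads (clap derive style; the repo ids and file names are illustrative, not necessarily the ones the example uses):

```rust
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Load gguf-quantized weights instead of the full safetensors ones.
    #[arg(long)]
    quantized: bool,
}

fn main() {
    let args = Args::parse();
    let (repo, weights) = if args.quantized {
        ("lmz/candle-whisper", "model-tiny-q80.gguf")
    } else {
        ("openai/whisper-tiny", "model.safetensors")
    };
    println!("fetching {weights} from {repo}");
    // ...hand the downloaded path to the quantized or regular model builder.
}
```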
|
| |
* Move some models to candle-transformers so that they can be shared.
* Also move falcon.
* Move Llama.
* Move whisper (partial).
|