forks/candle.git (branch: main)
path: root/candle-transformers/src/models/whisper
| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| Fix bug in whisper transformer (#2681) | mert-kurttutan | 2024-12-24 | 1 | -0/+1 |
| Module Docs (#2624) | zachcp | 2024-11-18 | 1 | -3/+7 |
| Documentation Pass for Models (#2617) | zachcp | 2024-11-15 | 1 | -0/+8 |
| Clippy fixes for 1.81.0. (#2461) | Laurent Mazare | 2024-09-05 | 1 | -2/+2 |
| Speaker embeddings computation for metavoice. (#1800) | Laurent Mazare | 2024-03-04 | 1 | -1/+1 |
| feat: support microphone whisper streaming (#1678) | drbh | 2024-02-12 | 2 | -0/+53 |
| feat: support multithread spectrogram and small perf tweaks (#1674) | drbh | 2024-02-08 | 3 | -28/+150 |
| Use candle_nn::embedding instead of local copies in a few models. (#1562) | Jani Monoses | 2024-01-10 | 1 | -6/+1 |
| Use the whisper-v3 tokenizer now that it has been added. (#1337) | Laurent Mazare | 2023-11-16 | 1 | -1/+1 |
| fix: address clippy 0.1.74 issues (#1336) | drbh | 2023-11-16 | 1 | -2/+1 |
| Preliminary support for whisper v3. (#1294) | Laurent Mazare | 2023-11-08 | 2 | -3/+7 |
| Consolidate the with-tracing usage. (#1234) | Laurent Mazare | 2023-11-01 | 1 | -27/+1 |
| Make the whisper model cloneable (#1200) | Laurent Mazare | 2023-10-27 | 2 | -1/+11 |
| Move the common quantized-nn code to a shared module. (#1063) | Laurent Mazare | 2023-10-09 | 1 | -42/+6 |
| Better control on the optional dequantization in QMatMul (#1049) | Laurent Mazare | 2023-10-07 | 1 | -6/+5 |
| Add a quantized variant of whisper (#1017) | Laurent Mazare | 2023-10-02 | 3 | -18/+424 |
| Use softmax-last-dim in whisper. (#810) | Laurent Mazare | 2023-09-11 | 1 | -2/+2 |
| Move some models to candle-transformers so that it's easier to re-use. (#794) | Laurent Mazare | 2023-09-10 | 3 | -0/+652 |