forks/candle.git (branch: main)
path: candle-transformers/src/models/with_tracing.rs
Commit message                                                        Author          Age         Files  Lines
Use the faster rms-norm kernel for llama. (#2107)                     Laurent Mazare  2024-04-22  1      -0/+5
Use a common with_tracing::RmsNorm in a few models. (#1871)           Jani Monoses    2024-03-18  1      -0/+21
Expose some helper functions to create quantized models. (#1837)     Laurent Mazare  2024-03-12  1      -0/+6
Support for attention bias in gemma + refactor things a bit. (#1744)  Laurent Mazare  2024-02-22  1      -0/+6
Share the layer-norm implementation. (#1248)                          Laurent Mazare  2023-11-03  1      -0/+31
Marian MT model (#1210)                                               Laurent Mazare  2023-10-29  1      -0/+7
Remove the unused pragma and properly apply the bias. (#1147)         Laurent Mazare  2023-10-22  1      -0/+8
Add the blip image captioning model (#1140)                           Laurent Mazare  2023-10-20  1      -2/+2
Make some model cloneable. (#1125)                                    Laurent Mazare  2023-10-18  1      -3/+4
Improve the quantized whisper setup. (#1018)                          Laurent Mazare  2023-10-02  1      -1/+1
Add the quantized mixformer model. (#953)                             Laurent Mazare  2023-09-24  1      -0/+32
Tracing for the phi model (#936)                                      Laurent Mazare  2023-09-23  1      -0/+78
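The file tracked above collects thin tracing wrappers around candle-nn layers, which commits such as #936 and #1871 then reuse across models. Below is a minimal sketch of that wrapper pattern, assuming the public candle/candle-nn APIs (Module, VarBuilder, candle_nn::rms_norm) and a made-up span name; the exact fields and helpers in with_tracing.rs may differ.

use candle::{Module, Result, Tensor};
use candle_nn::VarBuilder;

// Sketch: pair a candle-nn layer with a tracing span so that each
// forward pass is recorded when a tracing subscriber is installed.
#[derive(Debug, Clone)]
pub struct RmsNorm {
    inner: candle_nn::RmsNorm,
    span: tracing::Span,
}

impl RmsNorm {
    pub fn new(size: usize, eps: f64, vb: VarBuilder) -> Result<Self> {
        // The span is created once at construction; "rms-norm" is an
        // illustrative name, not necessarily the one used in the real file.
        let span = tracing::span!(tracing::Level::TRACE, "rms-norm");
        let inner = candle_nn::rms_norm(size, eps, vb)?;
        Ok(Self { inner, span })
    }
}

impl Module for RmsNorm {
    fn forward(&self, xs: &Tensor) -> Result<Tensor> {
        // Entering the span is the only overhead added on top of the
        // inner layer; the computation is delegated to candle-nn.
        let _enter = self.span.enter();
        self.inner.forward(xs)
    }
}

With a tracing subscriber attached (for example via the tracing-chrome crate), each wrapped layer's forward calls then show up as timed spans, which is the kind of per-layer profiling the tracing-related commits above enable.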