forks/candle.git, branch main
Commit log for path: candle-transformers/src
Commit message | Author | Date | Files | Lines
Add a flag to select the dtype used in metavoice. (#1805) | Laurent Mazare | 2024-03-05 | 2 | -5/+13
Speaker embeddings computation for metavoice. (#1800) | Laurent Mazare | 2024-03-04 | 2 | -23/+109
Add an initial Segformer implementation (#1617) | Jiayu Liu | 2024-03-03 | 2 | -0/+706
More metavoice tweaks. (#1796) | Laurent Mazare | 2024-03-03 | 1 | -1/+1
Metavoice - first cut (#1717) | Laurent Mazare | 2024-03-02 | 2 | -0/+879
Rustfmt fix. (#1788) | Laurent Mazare | 2024-03-02 | 2 | -3/+10
Update StableLM config (#1787) | Frkri | 2024-03-02 | 2 | -12/+12
EfficientVit (MSRA) model (#1783) | Jani Monoses | 2024-03-01 | 2 | -0/+461
add models of rwkv v6 and quantized rwkv v6 (#1781) | Jack Shih | 2024-03-01 | 3 | -0/+629
Add the StarCoder2 model. (#1779) | Laurent Mazare | 2024-02-28 | 2 | -0/+348
Encodec encoding demo. (#1775) | Laurent Mazare | 2024-02-28 | 1 | -1/+2
Apply dilations in the encodec model. (#1772) | Laurent Mazare | 2024-02-27 | 1 | -19/+69
Encodec model. (#1771) | Laurent Mazare | 2024-02-27 | 2 | -0/+719
Avoid tensor copying in the quantized example. (#1770) | Laurent Mazare | 2024-02-27 | 1 | -4/+8
add quantized rwkv v5 model (#1743) | Jack Shih | 2024-02-25 | 3 | -2/+288
Tweak the VarMap set type. (#1758) | Laurent Mazare | 2024-02-25 | 2 | -9/+9
Make the cache for the llama model explicit too. (#1745) | Laurent Mazare | 2024-02-22 | 1 | -32/+38
Explicit caching in llama2.c. | laurent | 2024-02-22 | 2 | -55/+78
Support for attention bias in gemma + refactor things a bit. (#1744) | Laurent Mazare | 2024-02-22 | 5 | -39/+18
Add the Gemma models. (#1741) | Laurent Mazare | 2024-02-21 | 2 | -0/+381
Make the r, k, v tensors contiguous. (#1719) | Laurent Mazare | 2024-02-16 | 1 | -3/+3
Custom tokenizer for rwkv. (#1711) | Laurent Mazare | 2024-02-14 | 1 | -0/+92
Add the RWKV model (v5). (#1707) | Laurent Mazare | 2024-02-14 | 3 | -2/+319
Add ConvNeXt-V2 and smaller model variants. (#1709) | Jani Monoses | 2024-02-14 | 1 | -36/+174
Fixing quantized llama demo on metal. (#1703) | Nicolas Patry | 2024-02-13 | 1 | -13/+15
feat: support microphone whisper streaming (#1678) | drbh | 2024-02-12 | 2 | -0/+53
Improved mamba model optimized for inference (#1694) | Laurent Mazare | 2024-02-11 | 2 | -0/+212
Support sinusoidal embeddings in trocr. (#1690) | Laurent Mazare | 2024-02-10 | 1 | -12/+56
Use the repo config for trocr rather than hardcoding it + small tweaks. (#1689) | Laurent Mazare | 2024-02-10 | 2 | -13/+16
Remove the unused pragma in vit + handle the final layernorm. (#1688) | Laurent Mazare | 2024-02-10 | 1 | -7/+9
Add the Qwen2 model (#1684) | Laurent Mazare | 2024-02-09 | 2 | -0/+378
Add the ChatGLM model. (#1237) | Laurent Mazare | 2024-02-09 | 2 | -0/+594
feat: support multithread spectrogram and small perf tweaks (#1674) | drbh | 2024-02-08 | 3 | -28/+150
Quantized support for stable-lm2. (#1654) | Laurent Mazare | 2024-02-04 | 1 | -4/+9
make llama derive clone (#1648) | Daniel Clough | 2024-02-04 | 1 | -2/+8
Add StableLM-2, StableLM Code and Zephyr variants (#1650) | Jani Monoses | 2024-02-03 | 1 | -6/+21
Update mixformer.rs (#1601) | Bayang | 2024-02-03 | 1 | -1/+1
Add ConvNeXt model. (#1604) | Jani Monoses | 2024-02-03 | 2 | -0/+202
Quantized GGUF style (#1523) | Nicolas Patry | 2024-01-17 | 3 | -24/+33
Add MobileOne model. (#1595) | Jani Monoses | 2024-01-16 | 2 | -0/+334
Fix the rotary embeddings for the new phi implementation. (#1582) | Laurent Mazare | 2024-01-13 | 1 | -18/+16
Update the Phi model to use the updated architecture. (#1580) | Laurent Mazare | 2024-01-13 | 2 | -0/+366
Add RepVGG model. (#1561) | Jani Monoses | 2024-01-11 | 2 | -0/+307
Use candle_nn::embedding instead of local copies in a few models. (#1562) | Jani Monoses | 2024-01-10 | 5 | -31/+6
Do not implement Module for BatchNorm. (#1513) | Laurent Mazare | 2024-01-01 | 5 | -15/+14
Fix lints for clippy 1.75. (#1494) | Laurent Mazare | 2023-12-28 | 1 | -1/+1
add config_amazon_mistral_lite (#1493) | Daniel Clough | 2023-12-28 | 1 | -0/+18
feat: add clear_kv_cache to mistral and qmistral models (#1464) | drbh | 2023-12-21 | 2 | -0/+28
make fn name generic (#1459) | Daniel Clough | 2023-12-21 | 1 | -1/+2
add fn config_chat_ml (#1458) | Daniel Clough | 2023-12-20 | 1 | -0/+19