forks/candle.git — commit log for path /candle-examples (branch: main)
| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| Add a flag to force running the quantized model on CPUs. (#1778) | Laurent Mazare | 2024-02-28 | 1 | -1/+5 |
| Support more modes in the encodec example. (#1777) | Laurent Mazare | 2024-02-28 | 7 | -641/+159 |
| Make some dependencies optional in the examples. (#1776) | Laurent Mazare | 2024-02-28 | 1 | -2/+14 |
| Encodec encoding demo. (#1775) | Laurent Mazare | 2024-02-28 | 1 | -1/+16 |
| Encodec model. (#1771) | Laurent Mazare | 2024-02-27 | 4 | -0/+114 |
| Add an option to split the prompt. (#1766) | Laurent Mazare | 2024-02-27 | 1 | -1/+14 |
| add quantized rwkv v5 model (#1743) | Jack Shih | 2024-02-25 | 1 | -4/+38 |
| Cuda acceleration for quantized model. (#1754) | Laurent Mazare | 2024-02-25 | 1 | -0/+1 |
| Fix the eos token for gemma. (#1753) | Laurent Mazare | 2024-02-24 | 1 | -2/+2 |
| Fix typo in README (#1740) | Daniel Varga | 2024-02-22 | 1 | -1/+1 |
| Make the cache for the llama model explicit too. (#1745) | Laurent Mazare | 2024-02-22 | 1 | -3/+3 |
| Explicit caching in llama2.c. | laurent | 2024-02-22 | 2 | -20/+21 |
| Add the Gemma models. (#1741) | Laurent Mazare | 2024-02-21 | 2 | -0/+281 |
| Use the tokenizer-output-stream in the llama example. (#1715) | Laurent Mazare | 2024-02-15 | 4 | -20/+17 |
| Add a readme for rwkv. (#1712) | Laurent Mazare | 2024-02-14 | 1 | -0/+17 |
| Custom tokenizer for rwkv. (#1711) | Laurent Mazare | 2024-02-14 | 1 | -38/+13 |
| Add the RWKV model (v5). (#1707) | Laurent Mazare | 2024-02-14 | 1 | -0/+290 |
| Add ConvNeXt-V2 and smaller model variants. (#1709) | Jani Monoses | 2024-02-14 | 2 | -15/+40 |
| Detach the tensors on batch-norm eval. (#1702) | Laurent Mazare | 2024-02-13 | 2 | -5/+5 |
| feat: support microphone whisper streaming (#1678) | drbh | 2024-02-12 | 3 | -0/+816 |
| Improved mamba model optimized for inference (#1694) | Laurent Mazare | 2024-02-11 | 3 | -0/+319 |
| Fixing the qwen tokenizer location. (#1693) | Nicolas Patry | 2024-02-11 | 1 | -3/+1 |
| docs: add trocr examples (#1692) | Todsaporn Banjerdkit | 2024-02-10 | 2 | -2/+11 |
| Mention TrOCR in the readmes. (#1691) | Laurent Mazare | 2024-02-10 | 1 | -1/+7 |
| Use the repo config for trocr rather than hardcoding it + small tweaks. (#1689) | Laurent Mazare | 2024-02-10 | 1 | -40/+62 |
| ChatGLM custom tokenizer. (#1687) | Laurent Mazare | 2024-02-10 | 1 | -1/+3 |
| Add the custom tokenizer. (#1686) | Laurent Mazare | 2024-02-09 | 1 | -1/+3 |
| Use the proper endoftext token for gwen. (#1685) | Laurent Mazare | 2024-02-09 | 1 | -2/+2 |
| Add the Qwen2 model (#1684) | Laurent Mazare | 2024-02-09 | 1 | -0/+281 |
| Add the ChatGLM model. (#1237) | Laurent Mazare | 2024-02-09 | 1 | -0/+235 |
| Fix clippy lints for 1.76. (#1682) | Laurent Mazare | 2024-02-08 | 1 | -1/+1 |
| Fix token generation in bilingual models (non-English outputs) (#1668) | Guoqing Bao | 2024-02-06 | 2 | -1/+2 |
| Update docs to reflect current usage of example (#1610) | Tarek | 2024-02-04 | 1 | -4/+33 |
| Quantized support for stable-lm2. (#1654) | Laurent Mazare | 2024-02-04 | 2 | -6/+27 |
| Add StableLM-2, StableLM Code and Zephyr variants (#1650) | Jani Monoses | 2024-02-03 | 2 | -10/+56 |
| Supports more audio formats (#1628) | Hubert Shelley | 2024-02-03 | 3 | -12/+81 |
| Add ConvNeXt model. (#1604) | Jani Monoses | 2024-02-03 | 2 | -0/+124 |
| Quantized GGUF style (#1523) | Nicolas Patry | 2024-01-17 | 9 | -35/+43 |
| Add MobileOne model. (#1595) | Jani Monoses | 2024-01-16 | 2 | -0/+118 |
| Use the new phi model by default. (#1589) | Laurent Mazare | 2024-01-15 | 1 | -26/+29 |
| Update the Phi model to use the updated architecture. (#1580) | Laurent Mazare | 2024-01-13 | 1 | -11/+35 |
| Metal: f16 and bf16 where_cond + benchmark (#1545) | ivarflakstad | 2024-01-12 | 1 | -1/+0 |
| Mention VGG in the readme. (#1573) | Laurent Mazare | 2024-01-12 | 1 | -2/+4 |
| Pin the revision used for phi-v2 + make it the default. (#1572) | Laurent Mazare | 2024-01-12 | 2 | -10/+3 |
| Add RepVGG model. (#1561) | Jani Monoses | 2024-01-11 | 2 | -0/+131 |
| Use bindgen-cuda for the custom-kernel example. (#1536) | Laurent Mazare | 2024-01-07 | 4 | -236/+20 |
| Simplifying our internal cargo dependencies. (#1529) | Nicolas Patry | 2024-01-07 | 1 | -6/+6 |
| fix index_pos bug when kv cache is disabled. (#1517) | optman | 2024-01-06 | 1 | -4/+4 |
| Format properly the Stable Diffusion example run with params (#1511) | stano | 2024-01-01 | 1 | -1/+1 |
| Do not implement Module for BatchNorm. (#1513) | Laurent Mazare | 2024-01-01 | 1 | -1/+1 |