forks/candle.git (branch: main), path: candle-transformers

Commit message | Author | Date | Files | Lines (-/+)
fix: qwen2 lm_head loading #2443 (#2445) | ilookee | 2024-08-23 | 1 | -1/+1
Add FastViT model. (#2444) | Jani Monoses | 2024-08-23 | 2 | -0/+513
Fix for parler-tts, do not add the last slice of padding tokens. (#2442) | Laurent Mazare | 2024-08-22 | 1 | -1/+0
Add the DAC model. (#2433) | Laurent Mazare | 2024-08-19 | 4 | -1/+383
parler-tts support (#2431) | Laurent Mazare | 2024-08-18 | 2 | -0/+453
Add support for gemma-2. (#2425) | Laurent Mazare | 2024-08-17 | 2 | -0/+450
Fix the device for the bert attention mask. (#2414) | Laurent Mazare | 2024-08-14 | 1 | -1/+2
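The BERT attention-mask fix above concerns which device the mask tensor lives on; the mask itself is the usual additive one. As a general illustration (plain Rust on slices, not candle's tensor API), an additive attention mask maps padded positions to negative infinity so that softmax assigns them zero probability:

```rust
/// Build an additive attention mask from padding flags: valid
/// positions get 0.0, padded positions negative infinity, so that
/// softmax assigns padded slots ~zero probability.
fn additive_mask(is_pad: &[bool]) -> Vec<f32> {
    is_pad
        .iter()
        .map(|&p| if p { f32::NEG_INFINITY } else { 0.0 })
        .collect()
}

/// Numerically stable softmax over a slice of logits.
fn softmax(logits: &[f32]) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    let scores = [1.0f32, 2.0, 3.0];
    // The third position is padding.
    let mask = additive_mask(&[false, false, true]);
    let masked: Vec<f32> = scores.iter().zip(&mask).map(|(s, m)| s + m).collect();
    let probs = softmax(&masked);
    // The padded slot ends up with probability 0.0.
    println!("{probs:?}");
}
```

In a real model the mask is broadcast over batch and head dimensions, and (the point of #2414) must be allocated on the same device as the attention scores it is added to.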
Add Based LLM from Hazy Research. (#2411) | Jani Monoses | 2024-08-12 | 2 | -0/+590
Soft Non-Maximum Suppression (#2400) | Matthew O'Malley-Nichols | 2024-08-10 | 2 | -0/+280
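Soft-NMS differs from classic NMS in that overlapping detections are not discarded outright; their scores are decayed as a function of IoU with the selected box. The sketch below is a plain-Rust illustration of the Gaussian variant, not the code added in #2400:

```rust
#[derive(Clone, Copy)]
struct BBox {
    x1: f32,
    y1: f32,
    x2: f32,
    y2: f32,
    score: f32,
}

/// Intersection-over-union of two axis-aligned boxes.
fn iou(a: &BBox, b: &BBox) -> f32 {
    let ix = (a.x2.min(b.x2) - a.x1.max(b.x1)).max(0.0);
    let iy = (a.y2.min(b.y2) - a.y1.max(b.y1)).max(0.0);
    let inter = ix * iy;
    let area_a = (a.x2 - a.x1) * (a.y2 - a.y1);
    let area_b = (b.x2 - b.x1) * (b.y2 - b.y1);
    inter / (area_a + area_b - inter)
}

/// Gaussian Soft-NMS: repeatedly pick the highest-scoring box,
/// then decay the scores of the remaining boxes by
/// exp(-iou^2 / sigma) instead of deleting overlaps; boxes whose
/// score drops below `score_thresh` are discarded.
fn soft_nms(mut boxes: Vec<BBox>, sigma: f32, score_thresh: f32) -> Vec<BBox> {
    let mut kept = Vec::new();
    while !boxes.is_empty() {
        // Index of the highest-scoring remaining box.
        let mut best_i = 0;
        for (i, b) in boxes.iter().enumerate() {
            if b.score > boxes[best_i].score {
                best_i = i;
            }
        }
        let best = boxes.swap_remove(best_i);
        // Decay overlapping scores rather than removing the boxes.
        for b in boxes.iter_mut() {
            let o = iou(&best, b);
            b.score *= (-o * o / sigma).exp();
        }
        boxes.retain(|b| b.score >= score_thresh);
        kept.push(best);
    }
    kept
}

fn main() {
    let boxes = vec![
        BBox { x1: 0.0, y1: 0.0, x2: 10.0, y2: 10.0, score: 0.9 },
        BBox { x1: 1.0, y1: 1.0, x2: 11.0, y2: 11.0, score: 0.8 },
        BBox { x1: 50.0, y1: 50.0, x2: 60.0, y2: 60.0, score: 0.7 },
    ];
    // The overlapping box keeps a decayed (not zeroed) score.
    let kept = soft_nms(boxes, 0.5, 0.3);
    println!("{} boxes kept", kept.len());
}
```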
Add the MMDiT model of Stable Diffusion 3 (#2397) | Czxck001 | 2024-08-05 | 6 | -0/+763
add models support and example for THUDM/glm-4 (#2362) | 唐璜 | 2024-08-05 | 2 | -0/+596
Support for mistral-nemo. (#2396) | Laurent Mazare | 2024-08-04 | 1 | -5/+12
Simplify handling of flux modulations. (#2394) | Laurent Mazare | 2024-08-04 | 1 | -46/+88
Add the flux model for image generation. (#2390) | Laurent Mazare | 2024-08-04 | 5 | -0/+1145
Fix cargo fmt. (#2383) | Laurent Mazare | 2024-08-01 | 1 | -0/+1
Jina Bert Example fix and more configuration (#2191) | Joan Fontanals | 2024-08-01 | 1 | -0/+30
Add Hiera vision model. (#2382) | Jani Monoses | 2024-08-01 | 2 | -0/+303
bert attention mask (#1934) | Zheng Li | 2024-08-01 | 1 | -17/+32
Add support for Llama 3.1 (#2359) | Eric Buehler | 2024-07-26 | 14 | -50/+125
feat(candle-transformers/models/codegeex4-9b): add codegeex4-9b (#2334) | donjuanplatinum | 2024-07-21 | 2 | -0/+597
add quantized qwen2 (#2329) | Zhuo Jinggang | 2024-07-12 | 2 | -0/+324
Add Mobilenet v4 (#2325) | Jani Monoses | 2024-07-09 | 2 | -0/+801
Add EVA-02 model (https://arxiv.org/abs/2303.11331) (#2311) | v-espitalier | 2024-07-07 | 2 | -0/+419
Beit: Add the gen_relative_position_index() function (#2306) | v-espitalier | 2024-07-04 | 1 | -26/+63
Add Beit model (https://arxiv.org/abs/2106.08254) (#2305) | v-espitalier | 2024-07-01 | 2 | -0/+368
Add DINOv2Reg4 + PlantCLEF2024 (#2293) | v-espitalier | 2024-06-29 | 2 | -0/+282
Depth Anything v2 (#2279) | Jeroen Vlek | 2024-06-24 | 3 | -0/+632
Fix the fast bf16 gemm cublas kernels. (#2274) | Laurent Mazare | 2024-06-18 | 1 | -2/+1
Support for the new Qwen2 models. (#2257) | Laurent Mazare | 2024-06-07 | 1 | -2/+6
Add LLaVA support (#2234) | chenwanqq | 2024-06-03 | 7 | -0/+776
Add Debug, Clone, Deserialize to moondream config (#2222) | Dave Lage | 2024-05-28 | 1 | -0/+1
Enable the new layer-norm. (#2213) | Laurent Mazare | 2024-05-24 | 1 | -8/+4
Avoid a contiguous call in the quantized phi 3 model. (#2209) | Laurent Mazare | 2024-05-23 | 1 | -1/+1
Simplify the KvCache api. (#2207) | Laurent Mazare | 2024-05-23 | 1 | -7/+1
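The KvCache entry above refers to candle's key/value cache for autoregressive decoding. The idea it implements can be sketched in a few lines of plain Rust (this is an illustration of the concept only; candle's actual `KvCache` API operates on tensors and differs in shape and naming):

```rust
/// Minimal sketch of a per-layer key/value cache: each generation
/// step appends the new key/value rows so attention can reuse all
/// earlier positions without recomputing them.
struct KvCache {
    keys: Vec<Vec<f32>>,   // one entry per cached position
    values: Vec<Vec<f32>>,
}

impl KvCache {
    fn new() -> Self {
        Self { keys: Vec::new(), values: Vec::new() }
    }

    /// Append this step's key/value and return views over the full
    /// cached sequence, ready to be attended over.
    fn append(&mut self, k: Vec<f32>, v: Vec<f32>) -> (&[Vec<f32>], &[Vec<f32>]) {
        self.keys.push(k);
        self.values.push(v);
        (&self.keys, &self.values)
    }

    fn len(&self) -> usize {
        self.keys.len()
    }
}

fn main() {
    let mut cache = KvCache::new();
    for step in 0..3 {
        let k = vec![step as f32; 4];
        let v = vec![step as f32; 4];
        let (ks, _vs) = cache.append(k, v);
        println!("step {step}: {} cached positions", ks.len());
    }
}
```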
Use flash-attn in gemma. (#2195) | Laurent Mazare | 2024-05-18 | 1 | -18/+44
Support flash-attn in quantized phi3. (#2194) | Laurent Mazare | 2024-05-18 | 1 | -10/+40
Add a slice_set op. (#2193) | Laurent Mazare | 2024-05-18 | 1 | -22/+19
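A slice_set-style operation writes a smaller tensor into a region of a larger one at a given offset along one dimension. The sketch below illustrates the idea on a flat row-major buffer for the 2D row case only; candle's actual op works on tensors of any rank and dimension:

```rust
/// Copy `src` (some number of full rows, `cols` wide) into the
/// row-major buffer `dst` starting at row `offset`.
/// Illustrative sketch of a slice_set-style write, 2D rows only.
fn slice_set(dst: &mut [f32], cols: usize, offset: usize, src: &[f32]) {
    let start = offset * cols;
    dst[start..start + src.len()].copy_from_slice(src);
}

fn main() {
    let mut dst = vec![0.0f32; 4 * 3]; // 4 rows x 3 cols, all zeros
    let src = vec![1.0f32; 2 * 3];     // 2 rows x 3 cols, all ones
    slice_set(&mut dst, 3, 1, &src);   // overwrite rows 1..3
    // Rows 0 and 3 stay zero; rows 1 and 2 become ones.
    println!("{dst:?}");
}
```

This kind of in-place write is what makes pre-allocated KV caches cheap to update: the new rows are copied into place instead of concatenating tensors each step.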
Support embedding model gte-Qwen1.5-7B-instruct (#2190) | Yin Guobing | 2024-05-16 | 1 | -15/+62
Separate quantized phi-3 implementation. (#2157) | Laurent Mazare | 2024-05-04 | 3 | -4/+306
Bump the version number to 0.5.1. (#2155) | Laurent Mazare | 2024-05-03 | 1 | -1/+1
Add argsort. (#2132) | Laurent Mazare | 2024-04-27 | 2 | -43/+21
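Argsort returns the indices that would sort a sequence, leaving the data itself untouched. A plain-Rust reference (not candle's tensor op, which sorts along a dimension on-device):

```rust
/// Return the indices that would sort `xs` in ascending order.
fn argsort(xs: &[f32]) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..xs.len()).collect();
    // Sort the index vector by comparing the values it points at.
    idx.sort_by(|&a, &b| xs[a].partial_cmp(&xs[b]).unwrap());
    idx
}

fn main() {
    let xs = [0.3f32, -1.0, 2.5, 0.0];
    let order = argsort(&xs);
    // Indices of the values in ascending order: [1, 3, 0, 2].
    println!("{order:?}");
}
```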
Add Olmo models (#2127) | Isotr0py | 2024-04-26 | 2 | -0/+338
Add the phi-3 model. (#2120) | Laurent Mazare | 2024-04-24 | 2 | -0/+330
Use the faster rms-norm kernel for llama. (#2107) | Laurent Mazare | 2024-04-22 | 1 | -0/+5
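RMS normalization, used in llama-style models in place of layer-norm, scales each vector by the reciprocal of its root-mean-square. A scalar reference implementation (the commit above swaps in a fused kernel; this only shows the math):

```rust
/// Reference RMS normalization:
/// y[i] = x[i] / sqrt(mean(x^2) + eps) * weight[i]
fn rms_norm(xs: &[f32], weight: &[f32], eps: f32) -> Vec<f32> {
    let mean_sq = xs.iter().map(|x| x * x).sum::<f32>() / xs.len() as f32;
    let inv_rms = 1.0 / (mean_sq + eps).sqrt();
    xs.iter()
        .zip(weight)
        .map(|(x, w)| x * inv_rms * w)
        .collect()
}

fn main() {
    let xs = [1.0f32, 2.0, 3.0, 4.0];
    let weight = [1.0f32; 4]; // learned scale, identity here
    let ys = rms_norm(&xs, &weight, 1e-5);
    println!("{ys:?}");
}
```

Unlike layer-norm, there is no mean subtraction and no bias, which is what makes the fused kernel so simple and fast.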
Updated quantized phi model (#2099) | Laurent Mazare | 2024-04-21 | 2 | -0/+289
Derive clone and debug traits for Moondream model (#2100) | Santiago Medina | 2024-04-21 | 1 | -0/+1
Small cleanups to the llama multi-process example. (#2098) | Laurent Mazare | 2024-04-20 | 1 | -1/+7
Fix for gemma MQA. (#2091) | Laurent Mazare | 2024-04-19 | 1 | -2/+3
Use faster rotary embeddings for llama like models. (#2087) | Laurent Mazare | 2024-04-18 | 1 | -11/+6
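Rotary position embeddings (RoPE) encode position by rotating pairs of query/key components through an angle that grows with the token position. A scalar reference for the interleaved-pair layout (candle's fused op is faster and also supports the half-split layout; this only shows the math):

```rust
/// Rotate consecutive pairs (x[2i], x[2i+1]) by pos * theta^(-2i/dim),
/// the standard RoPE rotation for one head vector at one position.
fn apply_rope(xs: &mut [f32], pos: usize, theta: f32) {
    let dim = xs.len();
    for i in 0..dim / 2 {
        let freq = 1.0 / theta.powf(2.0 * i as f32 / dim as f32);
        let angle = pos as f32 * freq;
        let (sin, cos) = angle.sin_cos();
        let (x0, x1) = (xs[2 * i], xs[2 * i + 1]);
        xs[2 * i] = x0 * cos - x1 * sin;
        xs[2 * i + 1] = x0 * sin + x1 * cos;
    }
}

fn main() {
    let mut q = vec![1.0f32, 0.0, 1.0, 0.0];
    // theta = 10000.0 is the base used by llama-style models.
    apply_rope(&mut q, 2, 10000.0);
    println!("{q:?}");
}
```

Because each pair is a pure rotation, vector norms are preserved, and the dot product of two rotated vectors depends only on their relative position, which is the property RoPE is built around.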
Llama v3. (#2085) | Laurent Mazare | 2024-04-18 | 1 | -0/+10