path: root/candle-transformers
Commit message | Author | Date | Files | Lines (-/+)
* Support Skip Layer Guidance (SLG) for Stable Diffusion 3.5 Medium (#2590) | Czxck001 | 2024-11-01 | 1 | -4/+22
* Lazy upcasting for t5. (#2589) | Laurent Mazare | 2024-10-30 | 1 | -3/+48
* Support sd3.5 medium and MMDiT-X (#2587) | Czxck001 | 2024-10-30 | 2 | -23/+217
* Stable diffusion 3.5 support. (#2578) | Laurent Mazare | 2024-10-27 | 2 | -1/+43
* use softmax_last_dim (metal and cuda kernel) in llama attention layer (#2572) | Zack Angelo | 2024-10-23 | 1 | -1/+2
* Enable stable-diffusion 3 on metal. (#2560) | Laurent Mazare | 2024-10-14 | 1 | -2/+1
* Adds support for Stella_en_v5 embedding model - 1.5B variant (#2551) | Anubhab Bandyopadhyay | 2024-10-13 | 2 | -0/+400
* fix: Allow marian configs to deserialize from json. (#2556) | Mikarific | 2024-10-13 | 1 | -1/+2
* Add Stable Diffusion 3 Example (#2558) | Czxck001 | 2024-10-13 | 7 | -33/+158
* feat: integrate chinese clip and add example (#2555) | SethWen | 2024-10-10 | 4 | -0/+1134
* Add BertForMaskedLM to support SPLADE Models (#2550) | Akshay Ballal | 2024-10-07 | 1 | -0/+97
* Add ColPali (#2524) | Akshay Ballal | 2024-10-01 | 4 | -1/+103
* Pixtral polishing. (#2522) | Laurent Mazare | 2024-09-30 | 1 | -0/+26
* Add Pixtral. (#2521) | Laurent Mazare | 2024-09-30 | 6 | -5/+436
* Add PaliGemma. (#2519) | Laurent Mazare | 2024-09-29 | 3 | -0/+130
* Paligemma siglip vision config (#2518) | Laurent Mazare | 2024-09-29 | 1 | -0/+54
* Add the SigLIP model. (#2515) | Laurent Mazare | 2024-09-28 | 5 | -13/+617
* Remove some extra whitelines. (#2513) | Laurent Mazare | 2024-09-28 | 2 | -5/+0
* Add some llama-3.2 examples. (#2508) | Laurent Mazare | 2024-09-26 | 2 | -1/+13
* Quantized version of flux. (#2500) | Laurent Mazare | 2024-09-26 | 4 | -6/+490
* Add a RotatingKVCache. (#2493) | Laurent Mazare | 2024-09-23 | 1 | -32/+7
* Adding Granite 7b Instruct model example (#2487) | Juan Gomez | 2024-09-21 | 2 | -0/+459
* Add the mimi audio-tokenizer. (#2488) | Laurent Mazare | 2024-09-20 | 7 | -0/+2593
* Clippy fixes for 1.81.0. (#2461) | Laurent Mazare | 2024-09-05 | 11 | -19/+19
* MobileCLIP models S1 and S2 (#2454) | Jani Monoses | 2024-08-29 | 4 | -0/+358
* FastViT fixes. (#2452) | Jani Monoses | 2024-08-28 | 1 | -3/+3
* fix: qwen2 lm_head loading #2443 (#2445) | ilookee | 2024-08-23 | 1 | -1/+1
* Add FastViT model. (#2444) | Jani Monoses | 2024-08-23 | 2 | -0/+513
* Fix for parler-tts, do not add the last slice of padding tokens. (#2442) | Laurent Mazare | 2024-08-22 | 1 | -1/+0
* Add the DAC model. (#2433) | Laurent Mazare | 2024-08-19 | 4 | -1/+383
* parler-tts support (#2431) | Laurent Mazare | 2024-08-18 | 2 | -0/+453
* Add support for gemma-2. (#2425) | Laurent Mazare | 2024-08-17 | 2 | -0/+450
* Fix the device for the bert attention mask. (#2414) | Laurent Mazare | 2024-08-14 | 1 | -1/+2
* Add Based LLM from Hazy Research. (#2411) | Jani Monoses | 2024-08-12 | 2 | -0/+590
* Soft Non-Maximum Suppression (#2400) | Matthew O'Malley-Nichols | 2024-08-10 | 2 | -0/+280
* Add the MMDiT model of Stable Diffusion 3 (#2397) | Czxck001 | 2024-08-05 | 6 | -0/+763
* add models support and example for THUDM/glm-4 (#2362) | 唐璜 | 2024-08-05 | 2 | -0/+596
* Support for mistral-nemo. (#2396) | Laurent Mazare | 2024-08-04 | 1 | -5/+12
* Simplify handling of flux modulations. (#2394) | Laurent Mazare | 2024-08-04 | 1 | -46/+88
* Add the flux model for image generation. (#2390) | Laurent Mazare | 2024-08-04 | 5 | -0/+1145
* Fix cargo fmt. (#2383) | Laurent Mazare | 2024-08-01 | 1 | -0/+1
* Jina Bert Example fix and more configuration (#2191) | Joan Fontanals | 2024-08-01 | 1 | -0/+30
* Add Hiera vision model. (#2382) | Jani Monoses | 2024-08-01 | 2 | -0/+303
* bert attention mask (#1934) | Zheng Li | 2024-08-01 | 1 | -17/+32
* Add support for Llama 3.1 (#2359) | Eric Buehler | 2024-07-26 | 14 | -50/+125
* feat(candle-transformers/models/codegeex4-9b): add codegeex4-9 (#2334) | donjuanplatinum | 2024-07-21 | 2 | -0/+597
* add quantized qwen2 (#2329) | Zhuo Jinggang | 2024-07-12 | 2 | -0/+324
* Add Mobilenet v4 (#2325) | Jani Monoses | 2024-07-09 | 2 | -0/+801
* Add EVA-02 model ( https://arxiv.org/abs/2303.11331 ) (#2311) | v-espitalier | 2024-07-07 | 2 | -0/+419
* Beit: Add the gen_relative_position_index() function (#2306) | v-espitalier | 2024-07-04 | 1 | -26/+63