# candle-mixtral: 8x7B LLM using a sparse mixture of experts.

Mixtral-8x7B-v0.1 is a pretrained generative LLM built on a sparse mixture-of-experts architecture: it has 46.7B parameters in total, but only about 12.9B of them are active for any given token.

- [Blog post](https://mistral.ai/news/mixtral-of-experts/) from Mistral announcing the model release.
- [Model card](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the HuggingFace Hub.
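
Each Mixtral layer replaces the usual feed-forward block with 8 expert MLPs; a small router picks the top 2 experts per token and mixes their outputs. The snippet below is a minimal, dependency-free sketch of that top-2 gating step in plain Rust, not the candle implementation; the function names `softmax` and `route_top2` are illustrative only.

```rust
/// Softmax over a slice of router logits.
fn softmax(logits: &[f32]) -> Vec<f32> {
    let max = logits.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

/// Pick the top-2 experts for one token: returns (expert index, mixing weight).
fn route_top2(router_logits: &[f32]) -> Vec<(usize, f32)> {
    let probs = softmax(router_logits);
    let mut indexed: Vec<(usize, f32)> = probs.into_iter().enumerate().collect();
    // Sort experts by routing probability, highest first.
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    // Keep the two most likely experts and renormalize their weights.
    let norm: f32 = indexed[..2].iter().map(|&(_, p)| p).sum();
    indexed[..2].iter().map(|&(i, p)| (i, p / norm)).collect()
}

fn main() {
    // One router logit per expert (Mixtral uses 8 experts per layer).
    let router_logits = [0.1, 2.3, -0.4, 0.8, 1.9, -1.2, 0.0, 0.5];
    // Only the two selected experts run their feed-forward block for this token;
    // their outputs are combined with the weights printed here.
    for (expert, weight) in route_top2(&router_logits) {
        println!("expert {expert} with weight {weight:.3}");
    }
}
```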

## Running the example

```bash
$ cargo run --example mixtral --release -- --prompt "def print_prime(n): "
def print_prime(n):  # n is the number of prime numbers to be printed
    i = 2
    count = 0
    while (count < n):
        if (isPrime(i)):
            print(i)
            count += 1
        i += 1

def isPrime(n):
    for x in range(2, int(n**0.5)+1):
        if (n % x == 0):
            ...
```