* add: direction for lstm layer
* lint: remove unused Error import
* refactor: remove unnecessary int assignment to Direction enum
* refactor: use &'static str type instead of String for direction_str
* Run cargo fmt.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
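
The commits above confirm that a `Direction` enum was added for the LSTM layer; how it is surfaced in the config is not shown. A minimal sketch, assuming a public `direction` field on `LSTMConfig` with `Direction::Forward`/`Direction::Backward` variants:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::rnn::{lstm, Direction, LSTMConfig, RNN};
use candle_nn::{VarBuilder, VarMap};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);
    // Assumed field name: the commits only confirm the Direction enum itself.
    let cfg = LSTMConfig {
        direction: Direction::Backward, // scan the sequence right-to-left
        ..Default::default()
    };
    let layer = lstm(16, 32, cfg, vb)?;
    let xs = Tensor::randn(0f32, 1., (4, 10, 16), &dev)?; // (batch, seq, feat)
    let states = layer.seq(&xs)?;
    println!("timesteps processed: {}", states.len());
    Ok(())
}
```
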
use candle-nn LSTM
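
Switching to the stock layer means letting candle-nn own the weights and the unrolling. A sketch of the consumer side, assuming the crate's rnn API (`lstm`, the `RNN` trait's `seq`, and `LSTMState::h`):

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{lstm, LSTMConfig, VarBuilder, VarMap, RNN};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);
    let layer = lstm(8, 16, LSTMConfig::default(), vb)?;
    let xs = Tensor::randn(0f32, 1., (2, 5, 8), &dev)?; // (batch, seq, feat)
    // `seq` unrolls the whole sequence, returning one (h, c) state per step.
    let states = layer.seq(&xs)?;
    let last_h = states.last().expect("empty sequence").h();
    println!("last hidden state: {:?}", last_h.shape()); // (2, 16)
    Ok(())
}
```
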
* Relax the contiguous check for cuda kernels.
* Ensure contiguity for RNNs.
* Unrelated fix for segment anything.
* Better error message + allow concatenating empty slices.
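
Context for the contiguity bullets: views produced by transpose/permute are strided, and some cuda kernels assume a flat memory layout, so the RNN path copies when needed. A sketch of the pattern (`transpose`, `is_contiguous`, and `contiguous` are real candle ops; the setup is illustrative):

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let xs = Tensor::randn(0f32, 1., (4, 10, 16), &dev)?;
    // Transposing yields a strided view over the same storage...
    let t = xs.transpose(0, 1)?;
    assert!(!t.is_contiguous());
    // ...so materialize a flat copy before kernels that require one.
    let t = t.contiguous()?;
    assert!(t.is_contiguous());
    Ok(())
}
```
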
* Encodec model.
* Fixes.
* Add the padding functions.
* Get the LSTM bit to work.
* Get the encodec model to generate some tokens (decoder only for now).
* Minor tweak.
* Minor tweak.
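
The padding functions themselves aren't shown in the log; a plausible shape for one of them, built on candle's real `pad_with_zeros` (the `pad1d` name and signature are hypothetical):

```rust
use candle_core::{DType, Device, Result, Tensor};

// Hypothetical helper in the spirit of the commit: zero-pad the trailing
// (time) dimension of a (batch, channels, time) tensor.
fn pad1d(xs: &Tensor, left: usize, right: usize) -> Result<Tensor> {
    xs.pad_with_zeros(xs.rank() - 1, left, right)
}

fn main() -> Result<()> {
    let xs = Tensor::zeros((1, 2, 5), DType::F32, &Device::Cpu)?;
    let padded = pad1d(&xs, 2, 3)?;
    assert_eq!(padded.dims(), &[1, 2, 10]);
    Ok(())
}
```
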
* Add a GRU layer.
* Fix the n gate computation.
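
The n-gate fix is the subtle part of a GRU: in the usual (PyTorch-style) formulation the reset gate scales the *projected* hidden state inside the tanh, n_t = tanh(W_in x + b_in + r_t * (W_hn h + b_hn)). A sketch of just that computation; the tensor names are illustrative, not candle-nn internals:

```rust
use candle_core::{Device, Result, Tensor};

// Candidate ("n") gate of a GRU, PyTorch convention:
//   n_t = tanh(x_proj + r_t * h_proj)
// where x_proj = W_in x + b_in and h_proj = W_hn h + b_hn.
fn n_gate(x_proj: &Tensor, h_proj: &Tensor, r: &Tensor) -> Result<Tensor> {
    x_proj.add(&r.mul(h_proj)?)?.tanh()
}

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let x_proj = Tensor::randn(0f32, 1., (2, 16), &dev)?;
    let h_proj = Tensor::randn(0f32, 1., (2, 16), &dev)?;
    let r = Tensor::rand(0f32, 1., (2, 16), &dev)?; // reset gate in (0, 1)
    println!("{:?}", n_gate(&x_proj, &h_proj, &r)?.shape());
    Ok(())
}
```
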
* Add a LSTM test.
* Clippy.
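
A test along these lines is cheap to make deterministic: with all parameters zero, every gate is sigmoid(0) = 0.5 and the candidate is tanh(0) = 0, so the hidden state stays exactly zero. A sketch, assuming the rnn API (`lstm`, `RNN::step`, `LSTMState::h`):

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{lstm, LSTMConfig, VarBuilder, RNN};

#[test]
fn lstm_zero_weights() -> Result<()> {
    let dev = Device::Cpu;
    // Zero-initialized weights make the output analytically predictable.
    let vb = VarBuilder::zeros(DType::F32, &dev);
    let layer = lstm(3, 4, LSTMConfig::default(), vb)?;
    let x = Tensor::ones((1, 3), DType::F32, &dev)?;
    let state = layer.step(&x, &layer.zero_state(1)?)?;
    // c' = 0.5*c + 0.5*tanh(0) = 0, and h = 0.5 * tanh(c') = 0.
    assert_eq!(state.h().to_vec2::<f32>()?, vec![vec![0f32; 4]]);
    Ok(())
}
```
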
* Add tanh.
* Use tanh in the lstm block.
* Add a test for tanh forward and backward passes.
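
The backward-pass check can be made analytic, since d tanh(x)/dx = 1 - tanh(x)^2. A sketch using candle's autodiff (`Var`, `backward`, `GradStore::get`):

```rust
use candle_core::{Device, Result, Var};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let x = Var::new(&[0.0f32, 1.0, -1.0], &dev)?;
    let y = x.tanh()?;
    // Reduce to a scalar so backward() populates d(sum y)/dx = dy/dx.
    let grads = y.sum_all()?.backward()?;
    let dx = grads.get(&x).expect("no gradient for x");
    // Analytic derivative: 1 - tanh(x)^2, i.e. 1 - y^2.
    let expected = y.sqr()?.affine(-1.0, 1.0)?;
    let err = dx.sub(&expected)?.abs()?.sum_all()?.to_scalar::<f32>()?;
    assert!(err < 1e-6);
    Ok(())
}
```
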
* Add the rnn module.
* More LSTM.
* Implement the RNN forward pass.
* More forward pass for LSTM.
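
These commits build the module toward a trait with a per-timestep `step` plus state threading. The manual unroll below sketches the forward pass being implemented here, assuming the eventual API shape:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{lstm, LSTMConfig, VarBuilder, VarMap, RNN};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);
    let layer = lstm(8, 16, LSTMConfig::default(), vb)?;
    let xs = Tensor::randn(0f32, 1., (2, 5, 8), &dev)?; // (batch, seq, feat)
    // The forward pass threads the recurrent state through each timestep.
    let mut state = layer.zero_state(2)?;
    for t in 0..xs.dim(1)? {
        let x_t = xs.narrow(1, t, 1)?.squeeze(1)?; // (batch, feat) at step t
        state = layer.step(&x_t, &state)?;
    }
    println!("final hidden: {:?}", state.h().shape()); // (2, 16)
    Ok(())
}
```
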