
Candle LSTM

A re-implementation of HuggingFace Candle's LSTM inference, including bidirectional LSTM, to speed up inference on CPU.

This implementation is ONLY FOR CPU INFERENCE. DO NOT USE IT ON METAL OR CUDA.
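
For context, here is a minimal sketch of the upstream candle_nn LSTM API that this crate re-implements, pinned to the CPU device. The dimensions are arbitrary, and the sketch uses candle_nn's own types rather than this crate's interface, which may differ:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{LSTMConfig, RNN, VarBuilder, VarMap};

fn main() -> Result<()> {
    // CPU only: this crate's optimizations target CPU inference.
    let dev = Device::Cpu;

    // Fresh (random) weights; in practice these would be loaded from a file.
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);

    // Upstream Candle LSTM: 16 input features, 32 hidden units.
    let lstm = candle_nn::lstm(16, 32, LSTMConfig::default(), vb)?;

    // Input of shape (batch, seq_len, features).
    let x = Tensor::randn(0f32, 1f32, (4, 10, 16), &dev)?;

    // One hidden state per time step, collected into a single tensor.
    let states = lstm.seq(&x)?;
    let out = lstm.states_to_tensor(&states)?;
    println!("{:?}", out.shape());
    Ok(())
}
```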

Metal and CUDA

I tested inference on my MacBook Pro with an M2 chip: this implementation is ~5 ms slower than Candle on Metal.

On an RTX 4090 with CUDA 12.5, it is ~6x slower than Candle on CUDA.

Test Data

Install PyTorch and run simple.py to generate the test data:

  1. lstm_test.pt: PyTorch LSTM with batch_first = False.
  2. lstm_test_batch_first.pt: PyTorch LSTM with batch_first = True.
  3. bi_lstm_test.pt: PyTorch bidirectional LSTM with batch_first = False.
  4. bi_lstm_test_batch_first.pt: PyTorch bidirectional LSTM with batch_first = True.
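
To inspect one of these generated files from Rust without PyTorch installed, a minimal sketch using Candle's pickle reader might look like the following; it assumes lstm_test.pt sits in the working directory and simply lists the stored tensors:

```rust
use candle_core::pickle;

fn main() -> candle_core::Result<()> {
    // Read every tensor stored in the PyTorch checkpoint produced by simple.py.
    let tensors = pickle::read_all("lstm_test.pt")?;
    for (name, tensor) in tensors {
        println!("{name}: {:?} {:?}", tensor.shape(), tensor.dtype());
    }
    Ok(())
}
```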
