#memoization #cache #memo #unsync #hash-map

fn-memo

A library for memoizing functions

5 releases (3 stable)

1.2.0 Aug 15, 2019
1.1.1 Feb 14, 2019
0.2.0 Feb 5, 2019
0.1.0 Jan 27, 2019

#235 in Caching


599 downloads per month

MIT/Apache

16KB
274 lines

FnMemo


A Rust library for memoizing functions.

Documentation: API reference

Usage

Add the following dependency to your Cargo.toml:

[dependencies]
fn-memo = "1.2"

By default, fn-memo includes the synchronized APIs, which pull in additional dependencies. If you only use memoization in a single thread and want to reduce dependencies, use the following configuration.

[dependencies]
fn-memo = { version = "1.2", default-features = false }

lib.rs:

A library for memoizing functions, using a cache to improve performance.

You can create a memoized function with the unsync::memoize function, which uses a HashMap for caching.

use fn_memo::{FnMemo, unsync, recur_fn::direct};

let mul_2 = unsync::memoize(direct(|n| {
    println!("Evaluating {}", n);
    n * 2
}));

assert_eq!(0, mul_2.call(0)); // Prints "Evaluating 0".
assert_eq!(4, mul_2.call(2)); // Prints "Evaluating 2".
assert_eq!(10, mul_2.call(5)); // Prints "Evaluating 5".
assert_eq!(4, mul_2.call(2)); // No output; the result is cached.
mul_2.clear_cache();
assert_eq!(4, mul_2.call(2)); // Prints "Evaluating 2" again.

The memoize function takes a RecurFn argument, which allows you to memoize a recursive function so that the result of each recursive call is cached. See the API reference of recur-fn for details.

use fn_memo::{FnMemo, unsync, recur_fn::recur_fn};

let fib = unsync::memoize(recur_fn(|fib, n: usize| {
    println!("Evaluating {}", n);
    if n <= 1 {
        n
    } else {
        fib(n - 1) + fib(n - 2)
    }
}));

assert_eq!(55, fib.call(10));
assert_eq!(5, fib.call(5));

The code above prints the evaluation of each argument from 0 to 10, and each line is printed only once.

For sequences (i.e. functions that take a usize argument), you can also use unsync::memoize_seq. It caches results in a Vec, which gives better performance but uses memory proportional to the largest argument that has been cached.
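A minimal sketch of memoize_seq, assuming it accepts a RecurFn over usize arguments the same way memoize does (check the API reference for the exact bounds):

use fn_memo::{FnMemo, unsync, recur_fn::recur_fn};

// Same Fibonacci as above, but results are cached in a Vec indexed by the argument.
let fib = unsync::memoize_seq(recur_fn(|fib, n: usize| {
    if n <= 1 {
        n
    } else {
        fib(n - 1) + fib(n - 2)
    }
}));

assert_eq!(55, fib.call(10)); // Caches the results for arguments 0 through 10.
assert_eq!(5, fib.call(5));   // Already cached; no recomputation.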

You can customize the cache data structure by implementing the unsync::Cache trait and creating the memoization with the unsync::Memo::new method.

The APIs under the unsync namespace are for single-threaded use. The result of unsync::memoize is not Sync, even if the memoized function is.

use std::{sync::Arc, thread};
use fn_memo::{FnMemo, unsync, recur_fn::direct};

let f = Arc::new(unsync::memoize(direct(|n: i32| n)));
thread::spawn(move || { f }); // Compile Error

The synchronized memoization APIs are under the sync namespace.

use fn_memo::{FnMemo, sync::chashmap, recur_fn::direct};
use std::thread;
use std::sync::Arc;

let mul_2 = Arc::new(chashmap::memoize(direct(|n| {
    println!("Evaluating {}", n);
    n * 2
})));

let mut threads = Vec::new();
for _ in 0..4 {
    threads.push(thread::spawn({
        let mul_2 = Arc::clone(&mul_2);
        move || {
            for n in 0..10 {
                assert_eq!(n*2, mul_2.call(n));
            }
        }
    }));
}
for thread in threads {
    thread.join().unwrap();
}

The code above prints the evaluation of each argument from 0 to 9, and each line is printed only once, even though four threads call the memoized function concurrently.

Dependencies

~11–445KB