Macro to make recursive function run on the heap (i.e. no stack overflow).

MPL-2.0 license

#[decurse::decurse] // 👈 Slap this on your recursive function and stop worrying about stack overflow!
fn factorial(x: u32) -> u32 {
	if x == 0 {
		1
	} else {
		x * factorial(x - 1)
	}
}

println!("{}", factorial(10));

More examples (fibonacci, DFS, ...) are in the examples directory.


The macros provided by this crate make your recursive functions run on the heap instead of on the stack. They work on stable Rust (1.56 at the time of writing).

Here's an example to illustrate the mechanism.

fn factorial(x: u32) -> u32 {
	// 🅐
	if x == 0 {
		1
	} else {
		let rec = {
			// 🅑
			factorial(x - 1)
		};
		// 🅒
		rec * x
	}
}

If we call factorial(1), the following would happen:

  • We run the code in the function starting at point 🅐.
  • When we reach point 🅑, we don't immediately call factorial(0); instead, we save the information that we have to call factorial(0)[1].
  • Once that information is saved, we pause the execution of factorial(1), storing its state on the heap[2].
  • We then execute factorial(0). During this, the "stack state" of factorial(1) is not on the stack; it is stored on the heap.
  • Once we have the result of factorial(0), we resume factorial(1), giving it that result[3].
  • The execution continues at point 🅒 and onward.

[1] To send this information out of the function, we put it in a thread-local.

[2] This is accomplished by converting your function into an async function, and awaiting to pause it. It is somewhat of a hack built on async/await.

[3] This again uses a thread-local.

Click to show an example of what the macro expands to
fn factorial(arg_0: u32) -> u32 {
	async fn factorial(x: u32) -> u32 {
		if x == 0 {
			1
		} else {
			x * ({
				// Save what we have to do next.
				::decurse::for_macro_only::sound::set_next(factorial(x - 1));
				// Pause the current function.
				// Once resumed, get the result.
				// ... (rest of the expansion elided)
			})
		}
	}
	// ... (outer driver that runs the async function elided)
}

This crate provides two macros: decurse and decurse_unsound. Simply put them on top of your function.

#[decurse::decurse]
fn some_function(...) -> ...

#[decurse::decurse_unsound]
fn some_function(...) -> ...


The decurse macro is the version you should prefer. It does not use unsafe code and is thus safe.

However, it does not work on functions with lifetimed types (&T, SomeStruct<'a>, etc.) in the argument or return type.


The decurse_unsound macro uses unsafe code in very dangerous ways. I am far from confident that it is safe, so I'm calling it unsound. However, I have yet to come up with an example that demonstrates unsoundness, so there is a small chance that it is actually sound. Brave souls, try it out!

This version does not suffer from the limitation of the safe version. Arguments and return type can be lifetimed just as in any functions.


  • As mentioned, the safe variant only works on functions without lifetimed type arguments or lifetimed return type.

    • The owning_ref crate is great for working around this.
    • You can use the "unsound" variant, of course. But it might cause problems.
  • This is not tail-call optimization. Also you can still blow up your heap (although it is much harder).

  • One function only. Alternating recursion (f calls g then g calls f) is not supported. Calling the same function but with different generic parameters is not supported.

  • Async functions are not supported.

  • Struct methods are not supported. Freestanding functions only.

  • The macro only understands recursive calls that are written literally.

     // This would work:
     recursive(x - 1);
     // The macro wouldn't understand this:
     let f = recursive;
     f(x - 1);
  • The function must have no more than 12 arguments.

    • This is actually a limitation of the pfn crate.
  • impl Trait in argument position is not supported.

    • You can use normal, named, generics.
  • This is still very experimental. The safe variant doesn't contain unsafe code, but even then you should be careful.

  • Multithreading is not supported.


Benchmarking recursive linear search. See the code.

Vec Size    Time (decurse) (s)    Time (normal) (s)    decurse/normal
20000       0.65                  0.19                 3.45
40000       1.29                  0.43                 2.96
60000       2.11                  0.78                 2.69
80000       2.81                  1.24                 2.27
100000      3.49                  Stack Overflow       N/A
120000      4.32                  Stack Overflow       N/A
140000      5.23                  Stack Overflow       N/A
160000      5.99                  Stack Overflow       N/A
180000      6.72                  Stack Overflow       N/A

decurse version is about 3x slower 😦😦😦.

Same benchmark with the slow(8723) call uncommented for both linear_search and stack_linear_search.

Vec Size    Time (decurse) (s)    Time (normal) (s)    decurse/normal
20000       0.70                  2.66                 0.26
40000       1.46                  5.39                 0.27
60000       2.11                  8.25                 0.26
80000       2.79                  10.85                0.26
100000      3.56                  Stack Overflow       N/A
120000      4.47                  Stack Overflow       N/A
140000      5.23                  Stack Overflow       N/A
160000      6.16                  Stack Overflow       N/A
180000      6.57                  Stack Overflow       N/A

decurse version is about 4x faster 🤔🤔🤔

I expected the slow() call to just bring the two versions closer. It is very strange that the decurse version can become faster. Maybe the stack usage of the normal version makes it harder for the CPU to cache things?

Anyway, the takeaway here is do your own benchmark on your own use case. The recursive linear search implemented here isn't even something anyone would use!

I would still love to see what the numbers look like for your use cases. Please share!


This blog post by hurryabit inspired me to make this. The main idea is basically the same. Mine is more hacky because I want to avoid generators (which require nightly and won't be stabilized anytime soon), so I use async/await instead.

