kbnf


This crate provides a constrained decoding engine that ensures a language model's output adheres strictly to the format defined by KBNF (Koishi's BNF), an enhanced variant of EBNF. KBNF adds features that improve usability, notably embeddable regular expressions.
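To make "constrained decoding" concrete, here is a minimal sketch (not kbnf's actual API) of the core operation such an engine performs at every generation step: the logits of tokens that would take the output outside the grammar are masked to negative infinity, so sampling can only pick conforming tokens. The function name and the allowed-token set below are illustrative; a real engine derives the allowed set from the grammar state.

```rust
// Illustrative only: a constrained decoding engine computes which token ids
// are currently allowed by the grammar, then masks out everything else.
fn mask_logits(logits: &[f32], allowed_token_ids: &[usize]) -> Vec<f32> {
    logits
        .iter()
        .enumerate()
        .map(|(id, &logit)| {
            if allowed_token_ids.contains(&id) {
                logit
            } else {
                f32::NEG_INFINITY // this token can never be sampled
            }
        })
        .collect()
}

fn main() {
    let logits = [1.2_f32, 0.3, -0.5, 2.0];
    // Suppose the grammar permits only token ids 0 and 3 at this step.
    let masked = mask_logits(&logits, &[0, 3]);
    println!("{masked:?}"); // ids 1 and 2 become -inf
}
```

After masking, any standard sampling strategy (greedy, top-p, temperature) can be applied unchanged, which is why this approach is decoupled from the rest of the inference stack.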

If you are interested in the design and implementation behind this crate, you may want to check out my blog.

Features

  • Supports full context-free grammars with worst-case O(m*n^3) time complexity, where n is the length of the generated text and m is the vocabulary size.
  • Asymptotically fastest for subclasses of context-free grammars.
    • Guarantees worst-case O(m*n) time complexity for every LR(k) grammar (which includes almost all practical grammars).
    • Eventually achieves O(n) time complexity with caching, provided that n has a fixed upper bound or the grammar is regular.
  • Vocabulary-independent.
    • BPE, BBPE, you-name-it, all types of vocabulary are supported.
  • Supports UTF-8 characters in grammar.
  • Embeddable regular expressions.
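To illustrate what an embeddable regular expression buys you, here is a hypothetical KBNF-style grammar shown as a plain string. The exact syntax (rule terminators, regex delimiters) is defined by the crate's documentation and may differ from this sketch; the point is that a lexical pattern like a number can be written as a regex instead of being spelled out rule by rule.

```rust
// Hypothetical KBNF-style grammar; consult the crate docs for exact syntax.
fn main() {
    let grammar = r#"
start ::= "Temperature: " number " degrees";
number ::= #"-?[0-9]+(\.[0-9]+)?";
"#;
    println!("{grammar}");
}
```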

Documentation

Documentation and examples.

Add to your project

Simply add it to your Cargo.toml, or run cargo add kbnf on the command line.

Performance

One of the goals of this crate is for the constrained decoding engine to be "fast." This can be interpreted both theoretically and practically.

Theoretically, this crate is designed to provide the asymptotically fastest algorithm for each subclass of context-free grammar. By implementing an Earley recognizer with Leo's optimization, this crate achieves linear time complexity for every LR(k) grammar and quadratic time complexity for every unambiguous grammar. For general context-free grammars, things are more ambiguous (pun intended): while subcubic algorithms exist (albeit with large constants), all other general-purpose parsing algorithms (such as Earley, GLR, and GLL) are cubic, like ours.
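For readers unfamiliar with Earley parsing, the following is a minimal recognizer sketch showing the three classic operations (predict, scan, complete). It deliberately omits Leo's optimization and does not handle nullable (empty) rules; all names and the toy grammar are illustrative, not part of this crate's API.

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Debug)]
struct Item {
    head: String,      // left-hand side nonterminal
    body: Vec<String>, // right-hand side symbols
    dot: usize,        // position of the parse dot within `body`
    start: usize,      // chart index where this item began
}

/// Recognize `tokens` against `grammar`, starting from `start_sym`.
/// `grammar` maps each nonterminal to its alternative right-hand sides;
/// any symbol not present as a key is treated as a terminal.
fn earley_recognize(
    grammar: &HashMap<String, Vec<Vec<String>>>,
    start_sym: &str,
    tokens: &[&str],
) -> bool {
    let n = tokens.len();
    let mut chart: Vec<Vec<Item>> = vec![Vec::new(); n + 1];
    for body in &grammar[start_sym] {
        chart[0].push(Item { head: start_sym.to_string(), body: body.clone(), dot: 0, start: 0 });
    }
    for i in 0..=n {
        let mut j = 0;
        while j < chart[i].len() {
            let item = chart[i][j].clone();
            j += 1;
            if item.dot < item.body.len() {
                let sym = item.body[item.dot].clone();
                if let Some(bodies) = grammar.get(&sym) {
                    // Predict: expand the nonterminal after the dot.
                    for body in bodies {
                        let new = Item { head: sym.clone(), body: body.clone(), dot: 0, start: i };
                        if !chart[i].contains(&new) {
                            chart[i].push(new);
                        }
                    }
                } else if i < n && tokens[i] == sym {
                    // Scan: consume a matching terminal.
                    let new = Item { dot: item.dot + 1, ..item };
                    if !chart[i + 1].contains(&new) {
                        chart[i + 1].push(new);
                    }
                }
            } else {
                // Complete: advance every parent waiting on this nonterminal.
                let parents: Vec<Item> = chart[item.start]
                    .iter()
                    .filter(|p| p.dot < p.body.len() && p.body[p.dot] == item.head)
                    .cloned()
                    .collect();
                for p in parents {
                    let new = Item { dot: p.dot + 1, ..p };
                    if !chart[i].contains(&new) {
                        chart[i].push(new);
                    }
                }
            }
        }
    }
    chart[n]
        .iter()
        .any(|it| it.head == start_sym && it.dot == it.body.len() && it.start == 0)
}

fn main() {
    // Toy grammar: E ::= E "+" T | T ;  T ::= "a"
    let mut g: HashMap<String, Vec<Vec<String>>> = HashMap::new();
    g.insert("E".into(), vec![
        vec!["E".into(), "+".into(), "T".into()],
        vec!["T".into()],
    ]);
    g.insert("T".into(), vec![vec!["a".into()]]);
    println!("{}", earley_recognize(&g, "E", &["a", "+", "a"])); // true
    println!("{}", earley_recognize(&g, "E", &["a", "+"]));      // false
}
```

Leo's optimization, which this sketch omits, memoizes chains of right-recursive completions so that each one is processed in constant time; that is what brings the recognizer down to linear time on LR(k) grammars.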

Practically, this crate tries to make the engine as efficient as possible for grammars used in practice. While many improvements, such as Earley set compaction and lazy caching, have already been made, this is inherently an ongoing process. If you find the engine to be a bottleneck in your application, feel free to open an issue.
