
lrpar

lrpar provides a Yacc-compatible parser (where grammars can be generated at compile-time or run-time). It can take in traditional .y files and convert them into an idiomatic Rust parser.

If you're new to lrpar, please read the "quick start guide". The "grmtools book" and API reference have more detailed information. You can find the appropriate documentation for the version of lrpar you are using here:

Latest release and master:
  Quickstart guide
  grmtools book
  lrpar API

Documentation for all past and present releases is also available.

Example

Let's assume we want to statically generate a parser for a simple calculator language (and let's also assume we are able to use lrlex for the lexer). We need to add a build.rs file to our project which statically compiles both the lexer and parser. While we can perform both steps individually, it's easiest to use lrlex which does both jobs for us in one go. Our build.rs file thus looks as follows:

use cfgrammar::yacc::YaccKind;
use lrlex::CTLexerBuilder;

fn main() {
    CTLexerBuilder::new()
        .lrpar_config(|ctp| {
            ctp.yacckind(YaccKind::Grmtools)
                .grammar_in_src_dir("calc.y")
                .unwrap()
        })
        .lexer_in_src_dir("calc.l")
        .unwrap()
        .build()
        .unwrap();
}

where src/calc.l is as follows:

%%
[0-9]+ "INT"
\+ "+"
\* "*"
\( "("
\) ")"
[\t ]+ ;

and src/calc.y is as follows:

%start Expr
%avoid_insert "INT"
%%
Expr -> Result<u64, ()>:
      Expr '+' Term { Ok($1? + $3?) }
    | Term { $1 }
    ;

Term -> Result<u64, ()>:
      Term '*' Factor { Ok($1? * $3?) }
    | Factor { $1 }
    ;

Factor -> Result<u64, ()>:
      '(' Expr ')' { $2 }
    | 'INT'
      {
          let v = $1.map_err(|_| ())?;
          parse_int($lexer.span_str(v.span()))
      }
    ;
%%
// Any functions here are in scope for all the grammar actions above.

fn parse_int(s: &str) -> Result<u64, ()> {
    match s.parse::<u64>() {
        Ok(val) => Ok(val),
        Err(_) => {
            eprintln!("{} cannot be represented as a u64", s);
            Err(())
        }
    }
}

Because we specified that our Yacc file is in Grmtools format, each rule is annotated with a Rust type to which all of its production actions must conform (in this case, all the rules happen to share the same type, but that's not a requirement).
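
For instance, a hypothetical fragment (not part of the calculator grammar above; the rule names are illustrative, and parse_int is the helper defined in calc.y) could give each rule a different type:

Stmt -> Result<String, ()>:
      Expr { Ok(format!("= {}", $1?)) }
    ;

Expr -> Result<u64, ()>:
      'INT' { parse_int($lexer.span_str($1.map_err(|_| ())?.span())) }
    ;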

A simple src/main.rs is as follows:

use std::io::{self, BufRead, Write};

use lrlex::lrlex_mod;
use lrpar::lrpar_mod;

// Using `lrlex_mod!` brings the lexer for `calc.l` into scope.
lrlex_mod!("calc.l");
// Using `lrpar_mod!` brings the parser for `calc.y` into scope.
lrpar_mod!("calc.y");

fn main() {
    // Get the `LexerDef` for the `calc` language.
    let lexerdef = calc_l::lexerdef();
    let stdin = io::stdin();
    loop {
        print!(">>> ");
        io::stdout().flush().ok();
        match stdin.lock().lines().next() {
            Some(Ok(ref l)) => {
                if l.trim().is_empty() {
                    continue;
                }
                // Now we create a lexer with the `lexer` method with which
                // we can lex an input.
                let lexer = lexerdef.lexer(l);
                // Pass the lexer to the parser and lex and parse the input.
                let (res, errs) = calc_y::parse(&lexer);
                for e in errs {
                    println!("{}", e.pp(&lexer, &calc_y::token_epp));
                }
                match res {
                    Some(Ok(r)) => println!("Result: {}", r),
                    _ => eprintln!("Unable to evaluate expression.")
                }
            }
            _ => break
        }
    }
}
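
For the build script and the lrlex_mod!/lrpar_mod! macros to compile, the grmtools crates must be declared in Cargo.toml. A minimal sketch (version numbers are illustrative; use whichever lrpar release you are targeting):

[build-dependencies]
cfgrammar = "0.13"
lrlex = "0.13"
lrpar = "0.13"

[dependencies]
cfgrammar = "0.13"
lrlex = "0.13"
lrpar = "0.13"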

We can now cargo run our project and evaluate simple expressions:

>>> 2 + 3
Result: 5
>>> 2 + 3 * 4
Result: 14
>>> (2 + 3) * 4
Result: 20

lrpar also comes with advanced error recovery built-in. When recovery inserts a token, the corresponding $n value in an action is an Err lexeme, which is why the 'INT' action above uses map_err and why the grammar specifies %avoid_insert "INT":

>>> 2 + + 3
Parsing error at line 1 column 5. Repair sequences found:
   1: Delete +
   2: Insert INT
Result: 5
>>> 2 + 3 3
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *
   2: Insert +
   3: Delete 3
Result: 11
>>> 2 + 3 4 5
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *, Delete 4
   2: Insert +, Delete 4
   3: Delete 4, Delete 5
   4: Insert +, Shift 4, Delete 5
   5: Insert +, Shift 4, Insert +
   6: Insert *, Shift 4, Delete 5
   7: Insert *, Shift 4, Insert *
   8: Insert *, Shift 4, Insert +
   9: Insert +, Shift 4, Insert *
Result: 17
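
If repaired inputs should not be evaluated at all, the loop in src/main.rs could instead check errs before using the result. A sketch reusing the names from the example above:

let (res, errs) = calc_y::parse(&lexer);
if errs.is_empty() {
    // No repairs were needed: trust the parse result.
    match res {
        Some(Ok(r)) => println!("Result: {}", r),
        _ => eprintln!("Unable to evaluate expression.")
    }
} else {
    // Report the repair sequences, but discard the recovered value.
    for e in &errs {
        println!("{}", e.pp(&lexer, &calc_y::token_epp));
    }
    eprintln!("Input rejected: {} parse error(s).", errs.len());
}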
