
lrpar

lrpar provides a Yacc-compatible parser (where grammars can be generated at compile-time or run-time). It can take in traditional .y files and convert them into an idiomatic Rust parser.

If you're new to lrpar, please read the "quick start guide". The "grmtools book" and API reference have more detailed information. You can find the appropriate documentation for the version of lrpar you are using here:

Quickstart guide (latest release / master)
grmtools book (latest release / master)
lrpar API (latest release / master)

Documentation for all past and present releases

Example

Let's assume we want to statically generate a parser for a simple calculator language (and let's also assume we can use lrlex for the lexer). We need to add a build.rs file to our project that statically compiles both the lexer and the parser. While we could perform the two steps individually, it's easiest to use lrlex, which does both jobs for us in one go. Our build.rs file thus looks as follows:

use cfgrammar::yacc::YaccKind;
use lrlex::CTLexerBuilder;

fn main() {
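    // Compile the lexer; the lrpar_config callback also configures and
    // compiles the parser, so both are generated in this single build step.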
    CTLexerBuilder::new()
        .lrpar_config(|ctp| {
            ctp.yacckind(YaccKind::Grmtools)
                .grammar_in_src_dir("calc.y")
                .unwrap()
        })
        .lexer_in_src_dir("calc.l")
        .unwrap()
        .build()
        .unwrap();
}

where src/calc.l is as follows:

%%
[0-9]+ "INT"
\+ "+"
\* "*"
\( "("
\) ")"
[\t ]+ ;

The final rule tells the lexer to discard tabs and spaces rather than produce a token. src/calc.y is as follows:

%start Expr
%avoid_insert "INT"
%%
Expr -> Result<u64, ()>:
      Expr '+' Term { Ok($1? + $3?) }
    | Term { $1 }
    ;

Term -> Result<u64, ()>:
      Term '*' Factor { Ok($1? * $3?) }
    | Factor { $1 }
    ;

Factor -> Result<u64, ()>:
      '(' Expr ')' { $2 }
    | 'INT'
      {
          let v = $1.map_err(|_| ())?;
          parse_int($lexer.span_str(v.span()))
      }
    ;
%%
// Any functions here are in scope for all the grammar actions above.

fn parse_int(s: &str) -> Result<u64, ()> {
    match s.parse::<u64>() {
        Ok(val) => Ok(val),
        Err(_) => {
            eprintln!("{} cannot be represented as a u64", s);
            Err(())
        }
    }
}

Because we specified that our Yacc file is in Grmtools format, each rule has a Rust type to which all of its actions conform (in this case, all the rules happen to have the same type, but that's not a requirement).
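For example, a rule that builds up a list of integers could use a different type from the rules above. The following hypothetical fragment (not part of calc.y; it reuses the parse_int helper purely for illustration) sketches what that might look like:

Ints -> Result<Vec<u64>, ()>:
      Ints 'INT'
      {
          let mut v = $1?;
          let l = $2.map_err(|_| ())?;
          v.push(parse_int($lexer.span_str(l.span()))?);
          Ok(v)
      }
    | 'INT'
      {
          let l = $1.map_err(|_| ())?;
          Ok(vec![parse_int($lexer.span_str(l.span()))?])
      }
    ;

Here Ints has the type Result<Vec<u64>, ()> while Expr, Term and Factor keep Result<u64, ()>; the Rust compiler then checks that every action for a rule returns that rule's declared type.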

A simple src/main.rs is as follows:

use std::io::{self, BufRead, Write};

use lrlex::lrlex_mod;
use lrpar::lrpar_mod;

// Using `lrlex_mod!` brings the lexer for `calc.l` into scope.
lrlex_mod!("calc.l");
// Using `lrpar_mod!` brings the parser for `calc.y` into scope.
lrpar_mod!("calc.y");

fn main() {
    // Get the `LexerDef` for the `calc` language.
    let lexerdef = calc_l::lexerdef();
    let stdin = io::stdin();
    loop {
        print!(">>> ");
        io::stdout().flush().ok();
        match stdin.lock().lines().next() {
            Some(Ok(ref l)) => {
                if l.trim().is_empty() {
                    continue;
                }
                // Now we create a lexer with the `lexer` method with which
                // we can lex an input.
                let lexer = lexerdef.lexer(l);
                // Pass the lexer to the parser and lex and parse the input.
                let (res, errs) = calc_y::parse(&lexer);
                for e in errs {
                    println!("{}", e.pp(&lexer, &calc_y::token_epp));
                }
                match res {
                    Some(Ok(r)) => println!("Result: {}", r),
                    _ => eprintln!("Unable to evaluate expression.")
                }
            }
            _ => break
        }
    }
}
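One last piece is Cargo.toml: the grmtools crates need to be declared both as build-dependencies (used by build.rs) and as normal dependencies (used by the generated modules and src/main.rs). A minimal sketch, assuming the 0.13 release series (adjust the version numbers to the release you are actually using):

# Used by build.rs to compile calc.l and calc.y at build time.
[build-dependencies]
cfgrammar = "0.13"
lrlex = "0.13"
lrpar = "0.13"

# Used by the generated lexer/parser modules and src/main.rs at run time.
[dependencies]
cfgrammar = "0.13"
lrlex = "0.13"
lrpar = "0.13"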

We can now cargo run our project and evaluate simple expressions:

>>> 2 + 3
Result: 5
>>> 2 + 3 * 4
Result: 14
>>> (2 + 3) * 4
Result: 20

lrpar also comes with advanced error recovery built in. When a repair sequence inserts a token (e.g. Insert INT), the corresponding $n in an action is an Err lexeme with no underlying text, which is why Factor's action uses map_err and why the grammar declares %avoid_insert "INT" to steer error recovery away from inventing integer values:

>>> 2 + + 3
Parsing error at line 1 column 5. Repair sequences found:
   1: Delete +
   2: Insert INT
Result: 5
>>> 2 + 3 3
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *
   2: Insert +
   3: Delete 3
Result: 11
>>> 2 + 3 4 5
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *, Delete 4
   2: Insert +, Delete 4
   3: Delete 4, Delete 5
   4: Insert +, Shift 4, Delete 5
   5: Insert +, Shift 4, Insert +
   6: Insert *, Shift 4, Delete 5
   7: Insert *, Shift 4, Insert *
   8: Insert *, Shift 4, Insert +
   9: Insert +, Shift 4, Insert *
Result: 17
