26 releases (13 stable)

3.4.0 Mar 6, 2024
3.3.0 Jun 19, 2023
3.2.0 Sep 17, 2020
3.1.0 Jan 6, 2020
0.1.0 Dec 30, 2016

#8 in Parser tooling

Download history: roughly 8,000–12,500 downloads per week from late August through early December 2024.

39,015 downloads per month
Used in 130 crates (28 directly)

MIT license

175KB
1.5K SLoC

pom

PEG parser combinators created using operator overloading without macros.

Document

What is PEG?

PEG stands for parsing expression grammar. It is a type of analytic formal grammar, i.e. it describes a formal language in terms of a set of rules for recognizing strings in the language. Unlike CFGs, PEGs cannot be ambiguous; if a string parses, it has exactly one valid parse tree. Each parsing function conceptually takes an input string as its argument and yields one of the following results:

  • success, in which case the function may optionally move forward or consume one or more characters of the input string supplied to it, or
  • failure, in which case no input is consumed.

Read more on Wikipedia.
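
Both properties can be seen directly in pom. The following is a hypothetical sketch (not taken from the crate's documentation): | is ordered choice, so the first alternative that succeeds wins, and !p succeeds exactly when p fails while consuming nothing.

use pom::parser::*;

// Ordered choice: seq(b"ab") is tried first, so a successful parse has
// exactly one interpretation.
let keyword = seq(b"ab") | seq(b"a");
assert_eq!(keyword.parse(b"ab"), Ok(&b"ab"[..]));

// !p succeeds only when p fails, and failure never consumes input.
let not_digit = !one_of(b"0123456789") * any();
assert_eq!(not_digit.parse(b"x"), Ok(b'x'));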

What is a parser combinator?

A parser combinator is a higher-order function that accepts several parsers as input and returns a new parser as its output. Parser combinators enable a recursive descent parsing strategy that facilitates modular piecewise construction and testing.

Parsers built using combinators are straightforward to construct, readable, modular, well-structured and easy to maintain. With operator overloading, a parser combinator can take the form of an infix operator used to glue different parsers together into a complete rule. Parser combinators thereby let parsers be defined in an embedded style, in code whose structure mirrors the rules of the formal grammar, and the resulting code is easier to debug than macro-generated code.

The main advantage is that there is no code generation step; you are always working in the vanilla language. Aside from build issues (and the usual issues around error messages and debuggability, which in fairness are about as bad with macros as with code generation), it is usually easier to freely intermix grammar expressions and plain code.
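
As a hypothetical sketch of that embedded style (using the lifetimed pom::parser::Parser form described under "Lifetimes and files" below), grammar rules are ordinary Rust functions that return parsers, so they compose and can be unit-tested like any other code:

use pom::parser::*;

// A rule is just a function; digits() is reused by integer() below.
fn digits<'a>() -> Parser<'a, u8, &'a [u8]> {
	one_of(b"0123456789").repeat(1..).collect()
}

fn integer<'a>() -> Parser<'a, u8, i64> {
	// convert() fails the parse when the conversion itself fails.
	digits()
		.convert(std::str::from_utf8)
		.convert(|s| s.parse::<i64>())
}

fn main() {
	assert_eq!(integer().parse(b"42"), Ok(42));
}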

List of predefined parsers and combinators

Basic Parsers Description
empty() Always succeeds, consumes no input.
end() Match end of input.
any() Match any symbol and return the symbol.
sym(t) Match a single terminal symbol t.
seq(s) Match a sequence of symbols.
list(p,s) Match a list of p, separated by s.
one_of(set) Succeeds when the current input symbol is one of the set.
none_of(set) Succeeds when the current input symbol is none of the set.
is_a(predicate) Succeeds when the predicate returns true on the current input symbol.
not_a(predicate) Succeeds when the predicate returns false on the current input symbol.
take(n) Read n symbols.
skip(n) Skip n symbols.
call(pf) Call a parser factory; can be used to create recursive parsers.
Parser Combinators Description
p | q Match p or q, return the result of the first success.
p + q Match p and q; if both succeed, return a pair of results.
p - q Match p and q; if both succeed, return the result of p.
p * q Match p and q; if both succeed, return the result of q.
p >> q Parse p to get result P, then run the parser q(P) and return its result (see the sketch after these tables).
-p Succeeds when p succeeds; doesn't consume input.
!p Succeeds when p fails; doesn't consume input.
p.opt() Make the parser optional. Returns an Option.
p.repeat(m..n) Repeat p at least m and at most n times.
p.repeat(0..) Repeat p zero or more times.
p.repeat(1..) Repeat p one or more times.
p.repeat(1..4) Match p at least 1 and at most 3 times.
p.repeat(5) Repeat p exactly 5 times.
p.map(f) Convert parser result to desired value.
p.convert(f) Convert parser result to desired value, fails in case of conversion error.
p.pos() Get input position after matching p.
p.collect() Collect all matched input symbols.
p.discard() Discard parser output.
p.name(_) Give the parser a name to identify parsing errors.
If the trace feature is enabled, a basic trace of the parse and its result is written to stdout.
p.expect(_) Mark the parser as expected; abort early when it fails in an ordered choice.
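
A hypothetical sketch exercising two of the less obvious entries above, list(p, s) and p >> q (the names csv, len and prefixed are made up for the illustration):

use pom::parser::*;

// list(p, s): items parsed by p, separated by s.
let csv = list(one_of(b"0123456789").repeat(1..).collect(), sym(b','));
assert_eq!(csv.parse(b"1,22,333"), Ok(vec![&b"1"[..], &b"22"[..], &b"333"[..]]));

// p >> q: the result of p selects the next parser, here a length-prefixed field.
let len = one_of(b"0123456789").map(|d| (d - b'0') as usize);
let prefixed = len >> (|n| take(n));
assert_eq!(prefixed.parse(b"3abc"), Ok(&b"abc"[..]));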

The choice of operators is determined by their precedence, arity and "meaning". Use * to ignore the result of the first operand at the start of an expression; + and - then select which results to keep in the rest of the expression.

For example, A * B * C - D + E - F will return the results of C and E as a pair.
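
As a hypothetical check of that rule, with single-symbol parsers standing in for A through F:

use pom::parser::*;

// * keeps the right-hand result, - keeps the left-hand result, + pairs both,
// so only the results of C (here b'c') and E (here b'e') survive.
let parser = sym(b'a') * sym(b'b') * sym(b'c') - sym(b'd') + sym(b'e') - sym(b'f');
assert_eq!(parser.parse(b"abcdef"), Ok((b'c', b'e')));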

Example code

use pom::parser::*;

let input = b"abcde";
// keeps the results of none_of(b"AB") and seq(b"de"); the sym parsers match but are discarded
let parser = sym(b'a') * none_of(b"AB") - sym(b'c') + seq(b"de");
let output = parser.parse(input);
assert_eq!(output, Ok( (b'b', vec![b'd', b'e'].as_slice()) ) );

Example JSON parser

extern crate pom;
use pom::parser::*;
use pom::Parser;

use std::collections::HashMap;
use std::str::{self, FromStr};

#[derive(Debug, PartialEq)]
pub enum JsonValue {
	Null,
	Bool(bool),
	Str(String),
	Num(f64),
	Array(Vec<JsonValue>),
	Object(HashMap<String,JsonValue>)
}

fn space() -> Parser<u8, ()> {
	one_of(b" \t\r\n").repeat(0..).discard()
}

fn number() -> Parser<u8, f64> {
	// integer part: a non-zero digit followed by any digits, or a single '0'
	let integer = one_of(b"123456789") - one_of(b"0123456789").repeat(0..) | sym(b'0');
	let frac = sym(b'.') + one_of(b"0123456789").repeat(1..);
	let exp = one_of(b"eE") + one_of(b"+-").opt() + one_of(b"0123456789").repeat(1..);
	let number = sym(b'-').opt() + integer + frac.opt() + exp.opt();
	number.collect().convert(str::from_utf8).convert(|s|f64::from_str(&s))
}

fn string() -> Parser<u8, String> {
	// characters allowed after a backslash, with control escapes mapped to their byte values
	let special_char = sym(b'\\') | sym(b'/') | sym(b'"')
		| sym(b'b').map(|_|b'\x08') | sym(b'f').map(|_|b'\x0C')
		| sym(b'n').map(|_|b'\n') | sym(b'r').map(|_|b'\r') | sym(b't').map(|_|b'\t');
	let escape_sequence = sym(b'\\') * special_char;
	let string = sym(b'"') * (none_of(b"\\\"") | escape_sequence).repeat(0..) - sym(b'"');
	string.convert(String::from_utf8)
}

fn array() -> Parser<u8, Vec<JsonValue>> {
	let elems = list(call(value), sym(b',') * space());
	sym(b'[') * space() * elems - sym(b']')
}

fn object() -> Parser<u8, HashMap<String, JsonValue>> {
	let member = string() - space() - sym(b':') - space() + call(value);
	let members = list(member, sym(b',') * space());
	let obj = sym(b'{') * space() * members - sym(b'}');
	obj.map(|members|members.into_iter().collect::<HashMap<_,_>>())
}

fn value() -> Parser<u8, JsonValue> {
	( seq(b"null").map(|_|JsonValue::Null)
	| seq(b"true").map(|_|JsonValue::Bool(true))
	| seq(b"false").map(|_|JsonValue::Bool(false))
	| number().map(|num|JsonValue::Num(num))
	| string().map(|text|JsonValue::Str(text))
	| array().map(|arr|JsonValue::Array(arr))
	| object().map(|obj|JsonValue::Object(obj))
	) - space()
}

pub fn json() -> Parser<u8, JsonValue> {
	space() * value() - end()
}

fn main() {
	let input = br#"
	{
        "Image": {
            "Width":  800,
            "Height": 600,
            "Title":  "View from 15th Floor",
            "Thumbnail": {
                "Url":    "http://www.example.com/image/481989943",
                "Height": 125,
                "Width":  100
            },
            "Animated" : false,
            "IDs": [116, 943, 234, 38793]
        }
    }"#;

	println!("{:?}", json().parse(input));
}

You can run this example with the following command:

cargo run --example json

Benchmark

Parser Time to parse the same JSON file
pom: json_byte 621,319 ns/iter (+/- 20,318)
pom: json_char 627,110 ns/iter (+/- 11,463)
pest: json_char 13,359 ns/iter (+/- 811)

Lifetimes and files

String literals have a 'static lifetime, so they work with the static version of Parser imported from pom::Parser. Input read from a file has a shorter lifetime; in that case you should import pom::parser::Parser and declare lifetimes on your parser functions. So

fn space() -> Parser<u8, ()> {
    one_of(b" \t\r\n").repeat(0..).discard()
}

would become

fn space<'a>() -> Parser<'a, u8, ()> {
    one_of(b" \t\r\n").repeat(0..).discard()
}
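
For example, a minimal sketch (the file name input.txt is made up) in which the parser's output borrows from a buffer read at runtime rather than from a 'static string literal:

use pom::parser::*;

fn word<'a>() -> Parser<'a, u8, &'a [u8]> {
	none_of(b" \t\r\n").repeat(1..).collect()
}

fn main() -> std::io::Result<()> {
	// The buffer only lives for the duration of main, so it is not 'static;
	// the 'a lifetime ties the parser's borrowed output to this buffer.
	let data = std::fs::read("input.txt")?;
	println!("{:?}", word().parse(&data));
	Ok(())
}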

Dependencies

~465–620KB