At least for the people who send me mail about a new language that they're designing, the general advice is: do it to learn about how to write a compiler. Don't have any expectations that anyone will use it, unless you hook up with some sort of organization in a position to push it hard. It's a lottery, and some can buy a lot of the tickets. There are plenty of beautiful languages (more beautiful than C) that didn't catch on. But someone does win the lottery, and doing a language at least teaches you something.
Dennis Ritchie (1941-2011) Creator of the C programming language and of Unix
Copyright (C) 2017 by Juancarlo Añez
Copyright (C) 2012-2016 by Juancarlo Añez and Thomas Bragg
THE ORIGINAL SOURCE OF FUNDING FOR GRAKO DEVELOPMENT HAS ENDED, and my work is moving away from parsing and translation.
If you'd like to contribute to the future development of Grako, please make a donation to the project.
Some of the planned new features are: grammar expressions for left and right associativity, new algorithms for left-recursion, a unified intermediate model for parsing and translating programming languages, and more...
Grako (for grammar compiler) is a tool that takes grammars in a variation of EBNF as input, and outputs memoizing (Packrat) PEG parsers in Python.
Grako can also compile a grammar stored in a string into a `grako.grammars.Grammar` object that can be used to parse any given input, much like the `re` module does with regular expressions.
Grako is different from other PEG parser generators:
- Generated parsers use Python's very efficient exception-handling system to backtrack. Grako generated parsers simply assert what must be parsed. There are no complicated if-then-else sequences for decision making or backtracking. Memoization allows going over the same input sequence several times in linear time.
- Positive and negative lookaheads, and the cut element (with its cleaning of the memoization cache) allow for additional, hand-crafted optimizations at the grammar level.
- Delegation to Python's re module for lexemes allows for (Perl-like) powerful and efficient lexical analysis.
- The use of Python's context managers considerably reduces the size of the generated parsers, which improves code clarity and CPU-cache hit rates.
- Include files, rule inheritance, and rule inclusion give Grako grammars considerable expressive power.
- Automatic generation of Abstract Syntax Trees and Object Models, along with Model Walkers and Code Generators, makes analysis and translation approachable.
The parser generator, the run-time support, and the generated parsers have measurably low Cyclomatic complexity. At around 5 KLOC of Python, it is possible to study all its source code in a single session.
The only dependency is the Python standard library, though the regex library will be used if installed, and colorama will be used for trace output if available. pygraphviz is required only for generating diagrams.
Grako is feature-complete and currently being used with complex grammars to parse, analyze, and translate hundreds of thousands of lines of input text, including source code in several programming languages.
[TOC]
Grako was created to address some recurring problems encountered over decades of working with parser generation tools:
- Some programming languages allow the use of keywords as identifiers, or give symbols different meanings depending on context (as in Ruby). A parser needs control over lexical analysis to be able to handle those languages.
- LL and LR grammars become contaminated with myriad lookahead statements to deal with ambiguous constructs in the source language. PEG parsers address ambiguity from the outset.
- Separating the grammar from the code that implements the semantics, and using a variation of a well-known grammar syntax (EBNF) allows for full declarative power in language descriptions. General-purpose programming languages are not up to the task.
- Semantic actions do not belong in a grammar. They create yet another programming language to deal with when doing parsing and translation: the source language, the grammar language, the semantics language, the generated parser's language, and the translation's target language. Most grammar parsers do not check the syntax of embedded semantic actions, so errors get reported at awkward moments, and against the generated code, not against the grammar.
- Preprocessing (like dealing with includes, fixed column formats, or structure-through-indentation) belongs in well-designed program code; not in the grammar.
- It is easy to recruit help with knowledge about a mainstream programming language like Python, but help is hard to find for working with complex grammar-description languages. Grako grammars are in the spirit of a Translators and Interpreters 101 course (if something is hard to explain to a college student, it's probably too complicated, or not well understood).
- Generated parsers should be easy to read and debug by humans. Looking at the generated source code is sometimes the only way to find problems in a grammar, the semantic actions, or in the parser generator itself. It's inconvenient to trust generated code that one cannot understand.
- Python is a great language for working with language parsing and translation.
A Grako generated parser consists of the following classes:
- A `MyLanguageBuffer` class, derived from `grako.buffering.Buffer`, that handles the grammar definitions for whitespace, comments, and case significance.

- A `MyLanguageParser` class, derived from `grako.parsing.Parser`, which uses a `MyLanguageBuffer` for traversing input text, and implements the parser using one method for each grammar rule:

        def _somerulename_(self):
            ...

- A `MyLanguageSemantics` class with one semantic method per grammar rule. Each method receives as its single parameter the Abstract Syntax Tree (AST) built from the rule invocation:

        def somerulename(self, ast):
            return ast

- An `if __name__ == '__main__':` definition, so the generated parser can be executed as a Python script.
The methods in the delegate class return the same AST received as parameter, but custom semantic classes can override the methods to have them return anything (for example, a Semantic Graph). The semantics class can be used as a template for the final semantics implementation, which can omit methods for the rules that do not need semantic treatment.
If present, a `_default()` method will be called in the semantics class when no method matched the rule name:

    def _default(self, ast):
        ...
        return ast
If present, a `_postproc()` method will be called in the semantics class after each rule (including the semantics) is processed. This method will receive the current parsing context as a parameter:

    def _postproc(self, context, ast):
        ...
Grako can be used as a library, much like Python's `re`, by embedding grammars as strings and generating grammar models instead of generating code.

`grako.compile(grammar, name=None, **kwargs)`
: Compiles the grammar and generates a model that can subsequently be used to parse input.

`grako.parse(grammar, input, **kwargs)`
: Compiles the grammar and parses the given input, producing an AST as result. The result is equivalent to calling `model = compile(grammar); model.parse(input)`. Compiled grammars are cached for efficiency.

`grako.to_python_sourcecode(grammar, name=None, filename=None, **kwargs)`
: Compiles the grammar to the Python source code that implements the parser.
This is an example of how to use Grako as a library:
    GRAMMAR = '''
        @@grammar::Calc

        start = expression $ ;

        expression
            =
            | term '+' ~ expression
            | term '-' ~ expression
            | term
            ;

        term
            =
            | factor '*' ~ term
            | factor '/' ~ term
            | factor
            ;

        factor
            =
            | '(' ~ @:expression ')'
            | number
            ;

        number = /\d+/ ;
    '''


    def main():
        import pprint
        import json
        from grako import parse
        from grako.util import asjson

        ast = parse(GRAMMAR, '3 + 5 * ( 10 - 20 )')

        print('PPRINT')
        pprint.pprint(ast, indent=2, width=20)
        print()

        json_ast = asjson(ast)

        print('JSON')
        print(json.dumps(json_ast, indent=2))
        print()


    if __name__ == '__main__':
        main()
And this is the output:

    PPRINT
    [ '3',
      '+',
      [ '5',
        '*',
        [ '10',
          '-',
          '20']]]

    JSON
    [
      "3",
      "+",
      [
        "5",
        "*",
        [
          "10",
          "-",
          "20"
        ]
      ]
    ]
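The other two library entry points follow the same pattern. A minimal sketch, reusing the `GRAMMAR` string from the example above:

    import grako

    # compile the grammar once into a model, then parse many times
    model = grako.compile(GRAMMAR, name='Calc')
    ast = model.parse('1 + 2 * 3')
    print(ast)

    # or emit the Python source code for a standalone parser
    source = grako.to_python_sourcecode(GRAMMAR, name='Calc')
    print(source)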
Grako can be run from the command line:

    $ python -m grako

Or:

    $ scripts/grako

Or just:

    $ grako

if Grako was installed using easy_install or pip.
The `-h` and `--help` parameters provide full usage information:

    $ python -m grako -h
    usage: grako [--generate-parser | --draw | --object-model | --pretty]
                 [--color] [--trace] [--no-left-recursion] [--name NAME]
                 [--no-nameguard] [--outfile FILE] [--object-model-outfile FILE]
                 [--whitespace CHARACTERS] [--help] [--version]
                 GRAMMAR

    Grako (for "grammar compiler") takes a grammar in a variation of EBNF as
    input, and outputs a memoizing PEG/Packrat parser in Python.

    positional arguments:
      GRAMMAR               the filename of the Grako grammar to parse

    optional arguments:
      --generate-parser     generate parser code from the grammar (default)
      --draw, -d            generate a diagram of the grammar (requires --outfile)
      --object-model, -g    generate object model from the class names given as
                            rule arguments
      --pretty, -p          generate a prettified version of the input grammar

    parse-time options:
      --color, -c           use color in traces (requires the colorama library)
      --trace, -t           produce verbose parsing output

    generation options:
      --no-left-recursion, -l
                            turn left-recursion support off
      --name NAME, -m NAME  name for the grammar (defaults to GRAMMAR base name)
      --no-nameguard, -n    allow tokens that are prefixes of others
      --outfile FILE, --output FILE, -o FILE
                            output file (default is stdout)
      --object-model-outfile FILE, -G FILE
                            generate object model and save to FILE
      --whitespace CHARACTERS, -w CHARACTERS
                            characters to skip during parsing (use "" to disable)

    common options:
      --help, -h            show this help message and exit
      --version, -v         provide version information and exit
    $
To use the generated parser, just subclass the base or the abstract parser, create an instance of it, and invoke its `parse()` method, passing the text to parse and the starting rule's name as parameters:

    import json

    from myparser import MyParser

    parser = MyParser()
    ast = parser.parse('text to parse', rule_name='start')
    print(ast)
    print(json.dumps(ast, indent=2))  # ASTs are JSON-friendly
The generated parsers' constructors accept named arguments to specify whitespace characters, the regular expression for comments, case sensitivity, verbosity, and more (see below).
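For example, a parser could be instantiated with several of these options at once (a sketch; each keyword argument shown is documented in the sections below):

    parser = MyParser(
        whitespace='\t ',            # characters to skip between tokens
        comments_re=r'\(\*.*?\*\)',  # regular expression for comments
        ignorecase=True,             # case-insensitive tokens and patterns
        parseinfo=True,              # attach parse information to AST nodes
    )
    ast = parser.parse('text to parse', rule_name='start')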
To add semantic actions, just pass a semantic delegate to the `parse()` method:

    model = parser.parse(text, rule_name='start', semantics=MySemantics())
If special lexical treatment is required (as in 80-column languages), then a descendant of `grako.buffering.Buffer` can be passed instead of the text:

    class MySpecialBuffer(MyLanguageBuffer):
        ...

    buf = MySpecialBuffer(text)
    model = parser.parse(buf, rule_name='start', semantics=MySemantics())
The generated parser's module can also be invoked as a script:

    $ python myparser.py inputfile startrule

As a script, the generated parser's module accepts several options:

    $ python myparser.py -h
    usage: myparser.py [-h] [-c] [-l] [-n] [-t] [-w WHITESPACE] FILE [STARTRULE]

    Simple parser for DBD.

    positional arguments:
      FILE                  the input file to parse
      STARTRULE             the start rule for parsing

    optional arguments:
      -h, --help            show this help message and exit
      -c, --color           use color in traces (requires the colorama library)
      -l, --list            list all rules and exit
      -n, --no-nameguard    disable the 'nameguard' feature
      -t, --trace           output trace information
      -w WHITESPACE, --whitespace WHITESPACE
                            whitespace specification
Grako uses a variant of the standard EBNF syntax. Syntax definitions for VIM and for Sublime Text can be found under the `etc/vim` and `etc/sublime` directories in the source code distribution.
A grammar consists of a sequence of one or more rules of the form:

    name = <expre> ;

If a name collides with a Python keyword, an underscore (`_`) will be appended to it in the generated parser.
Rule names that start with an uppercase character:

    FRAGMENT = /[a-z]+/ ;

do not advance over whitespace before beginning to parse. This feature comes in handy when defining complex lexical elements, as it allows breaking them into several rules.
The parser returns an AST value for each rule depending on what was parsed:
- A single value
- A list of ASTs
- A dict-like object for rules with named elements
- An object, when ModelBuilderSemantics is used
- None
See the Abstract Syntax Trees and Building Models sections for more details.
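As an illustration of these AST shapes, consider the following hypothetical rules (the names are illustrative):

    pair = left:ident '=' right:ident ;  # dict-like AST with 'left' and 'right'
    idents = { ident } ;                 # list AST (closures always yield lists)
    ident = /\w+/ ;                      # single value: the matched text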
The expressions, in reverse order of operator precedence, can be:
`e1 | e2`
: Choice. Match either `e1` or `e2`.

A `|` may be used before the first option if desired:

    choices
        =
        | e1
        | e2
        | e3
        ;
`e1 e2`
: Sequence. Match `e1` and then match `e2`.
`( e )`
: Grouping. Match `e`. For example: `('a' | 'b')`.
`[ e ]`
: Optionally match `e`.
`{ e }` or `{ e }*`
: Closure. Match `e` zero or more times. Note that the AST returned for a closure is always a list.
`{ e }+`
: Positive closure. Match `e` one or more times. The AST is always a list.
`{}`
: Empty closure. Match nothing and produce an empty list as AST.
`~`
: The cut expression. Commit to the current option and prevent other options from being considered, even if what follows fails to parse.

In this example, other options won't be considered if a parenthesis is parsed:

    atom
        =
        '(' ~ @:expre ')'
        | int
        | bool
        ;
`s%{ e }+`
: Positive join. Inspired by Python's `str.join()`, it parses the same as this expression:

    e {s ~ e}

yet the result is a single list of the form:

    [e, s, e, s, e, ...]

Use grouping if `s` is more complex than a *token* or a *pattern*:

    (s t)%{ e }+
`s%{ e }`
: Join. Parses the list of `s`-separated expressions, or the empty closure. It is equivalent to:

    s%{e}+|{}
`op<{ e }+`
: Left join. Like the positive join, but the result is a left-associative tree built with `tuple()`, in which the first element is the separator (`op`), and the other two elements are the operands.

The expression:

    '+'<{/\d+/}+

will parse this input:

    1 + 2 + 3 + 4

into this tree:

    (
        '+',
        (
            '+',
            (
                '+',
                '1',
                '2'
            ),
            '3'
        ),
        '4'
    )
`op>{ e }+`
: Right join. Like the positive join, but the result is a right-associative tree built with `tuple()`, in which the first element is the separator (`op`), and the other two elements are the operands.

The expression:

    '+'>{/\d+/}+

will parse this input:

    1 + 2 + 3 + 4

into this tree:

    (
        '+',
        '1',
        (
            '+',
            '2',
            (
                '+',
                '3',
                '4'
            )
        )
    )
`s.{ e }+`
: Positive gather. Like the positive join, but the separator is not included in the resulting AST.
`s.{ e }`
: Gather. Like the join, but the separator is not included in the resulting AST. It is equivalent to:

    s.{e}+|{}
`&e`
: Positive lookahead. Succeed if `e` can be parsed, but do not consume any input.
`!e`
: Negative lookahead. Fail if `e` can be parsed, and do not consume any input.
`'text'` or `"text"`
: Match the token *text* within the quotation marks.

Note that if *text* is alphanumeric, then **Grako** will check that the character following the token is not alphanumeric. This is done to prevent tokens like *IN* matching when the text ahead is *INITIALIZE*. This feature can be turned off by passing `nameguard=False` to the `Parser` or the `Buffer`, or by using a pattern expression (see below) instead of a token expression. Alternatively, the `@@nameguard` or `@@namechars` directives may be specified in the grammar:

    @@nameguard :: False

or to specify additional characters that should also be considered part of names:

    @@namechars :: '$-.'
`r'text'` or `r"text"`
: Match the token *text* within the quotation marks, interpreting *text* like Python's raw string literals.
`?"regexp"` or `?'regexp'`
: The pattern expression. Match the Python regular expression `regexp` at the current text position. Unlike other expressions, this one does not advance over whitespace or comments. For that, place the `regexp` as the only term in its own rule.

The *regexp* is interpreted as a Python raw string literal and passed *as-is* to Python's `re` module (or to `regex`, if available), using `match()` at the current position in the text. The matched text is the AST for the expression.

Consecutive patterns are concatenated to form a single one.
`/regexp/`
: Another form of the pattern expression.
`+/regexp/`
: Concatenate the given pattern with the preceding one.
`` `constant` ``
: Match nothing, but behave as if `constant` had been parsed.

Constants can be used to inject elements into the concrete and abstract syntax trees, perhaps avoiding having to write a semantic action. For example:

    boolean_option = name ['=' (boolean|`true`) ] ;
`rulename`
: Invoke the rule named `rulename`. To help with lexical aspects of grammars, rules with names that begin with an uppercase letter will not advance the input over whitespace or comments.
`>rulename`
: The include operator. Include the right-hand side of rule `rulename` at this point.

The following set of declarations:

    includable = exp1 ;

    expanded = exp0 >includable exp2 ;

has the same effect as defining *expanded* as:

    expanded = exp0 exp1 exp2 ;

Note that the included rule must be defined before the rule that includes it.
`()`
: The empty expression. Succeed without advancing over input. Its value is `None`.
`!()`
: The fail expression. This is actually `!` applied to `()`, which always fails.
`name:e`
: Add the result of `e` to the AST using `name` as key. If `name` collides with any attribute or method of `dict`, or is a Python keyword, an underscore (`_`) will be appended to the name.
`name+:e`
: Add the result of `e` to the AST using `name` as key. Force the entry to be a list even if only one element is added. Collisions with `dict` attributes or Python keywords are resolved by appending an underscore to `name`.
`@:e`
: The override operator. Make the AST for the complete rule be the AST for `e`.

The override operator is useful to recover only part of the right-hand side of a rule without the need to name it or add a semantic action.

This is a typical use of the override operator:

    subexp = '(' @:expre ')' ;

The AST returned for the `subexp` rule will be the AST recovered from invoking `expre`.
`@+:e`
: Like `@:e`, but make the AST always be a list.

This operator is convenient in cases such as:

    arglist = '(' @+:arg {',' @+:arg}* ')' ;

in which the delimiting tokens are of no interest.
`$`
: The end-of-text symbol. Verify that the end of the input text has been reached.
`# comment`
: Python-style comments are also allowed.
When there are no named items in a rule, the AST consists of the elements parsed by the rule, either a single item or a list. This default behavior makes it easier to write simple rules:

    number = /[0-9]+/ ;

without having to write:

    number = number:/[0-9]+/ ;

When a rule has named elements, the unnamed ones are excluded from the AST (they are ignored).
The following expressions are still recognized in grammars, but they are considered deprecated and will be removed in a future version of Grako.

`?/regexp/?`
: Another form of the pattern expression that can be used when there are slashes (`/`) in the pattern. Use the `?"regexp"` or `?'regexp'` forms instead.

`(* comment *)`
: Comments may appear anywhere in the text. Use Python-style comments instead.
Grako allows rules to specify Python-style arguments:

    addition(Add, op='+')
        =
        addend '+' addend
        ;

The argument values are fixed at grammar-compilation time.

An alternative syntax is available if no keyword parameters are required:

    addition::Add, '+'
        =
        addend '+' addend
        ;

Semantic methods must be ready to receive any arguments declared in the corresponding rule:

    def addition(self, ast, name, op=None):
        ...

When working with rule arguments, it is good practice to define a `_default()` method that is ready to take any combination of standard and keyword arguments:

    def _default(self, ast, *args, **kwargs):
        ...
Rules may extend previously defined rules using the `<` operator. The base rule must be defined previously in the grammar.

The following set of declarations:

    base::Param = exp1 ;

    extended < base = exp2 ;

has the same effect as defining *extended* as:

    extended::Param = exp1 exp2 ;

Parameters from the base rule are copied to the new rule if the new rule doesn't define its own. Repeated inheritance should be possible, but it hasn't been tested.
A grammar rule may be redefined by using the `@override` decorator:

    start = ab $;

    ab = 'xyz' ;

    @override
    ab = @:'a' {@:'b'} ;

When combined with the `#include` directive, rule overrides can be used to create a modified grammar without altering the original.
By default, an AST is either a list (for closures and rules without named elements), or a dict-derived object that contains one item for every named element in the grammar rule. Items can be accessed through the standard `dict` syntax (`ast['key']`), or as attributes (`ast.key`).

AST entries are single values if only one item was associated with a name, or lists if more than one item was matched. There's a provision in the grammar syntax (the `+:` operator) to force an AST entry to be a list even if only one element was matched. The value for named elements that were not found during the parse (perhaps because they are optional) is `None`.
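A short sketch of accessing such an AST (assuming a hypothetical rule `pair = name:/\w+/ '=' value:/\w+/ ;`):

    ast = parser.parse('a = b', rule_name='pair')
    print(ast['name'])  # dict-style access -> 'a'
    print(ast.value)    # attribute-style access -> 'b'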
When the `parseinfo=True` keyword argument has been passed to the `Parser` constructor, a `parseinfo` element is added to AST nodes that are dict-like. The element contains a `collections.namedtuple` with the parse information for the node:

    ParseInfo = namedtuple(
        'ParseInfo',
        [
            'buffer',
            'rule',
            'pos',
            'endpos',
            'line',
            'endline',
        ]
    )
With the help of the `Buffer.line_info()` method, it is possible to recover the line, column, and original text parsed for the node. Note that when `ParseInfo` is generated, the `Buffer` used during parsing is kept in memory for the lifetime of the AST.

Generation of `parseinfo` can also be controlled using the `@@parseinfo :: True` grammar directive.
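A sketch of how the parse information might be used to report source positions (assuming `parseinfo=True` was passed to the parser, and `node` is a dict-like AST node):

    info = node.parseinfo                    # the ParseInfo namedtuple shown above
    print(info.rule, info.line)              # rule name and line of the match
    print(info.buffer.line_info(info.pos))   # line/column details via Buffer.line_info()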
The prefix to be used in classes generated by Grako can be passed to the command-line tool using the `-m` option:

    $ grako -m MyLanguage mygrammar.ebnf

will generate:

    class MyLanguageParser(Parser):
        ...

The name can also be specified within the grammar using the `@@grammar` directive:

    @@grammar :: MyLanguage
By default, Grako generated parsers skip the usual whitespace characters with the regular expression `r'\s+'`, using the `re.UNICODE` flag (or with the `Pattern_White_Space` property if the regex module is available), but you can change that behavior by passing a `whitespace` parameter to your parser.

For example, the following will skip over tab (`\t`) and space characters, but not over other typical whitespace characters such as newline (`\n`):

    parser = MyParser(text, whitespace='\t ')

The character string is converted into a regular expression character set before parsing starts.

You can also provide a regular expression directly instead of a string. The following is equivalent to the above example:

    parser = MyParser(text, whitespace=re.compile(r'[\t ]+'))

Note that the regular expression must be pre-compiled to let Grako distinguish it from a plain string.

If you do not define any whitespace characters, then you will have to handle whitespace in your grammar rules (as is often done in PEG parsers):

    parser = MyParser(text, whitespace='')

Whitespace may also be specified within the grammar using the `@@whitespace` directive, although any of the above methods will override the setting in the grammar:

    @@whitespace :: /[\t ]+/
If the source language is case insensitive, this can be specified in the parser by using the `ignorecase` parameter:

    parser = MyParser(text, ignorecase=True)

You may also specify case insensitivity within the grammar using the `@@ignorecase` directive:

    @@ignorecase :: True

The change will affect both token and pattern matching.
Parsers will skip over comments specified as a regular expression using the `comments_re` parameter:

    parser = MyParser(text, comments_re="\(\*.*?\*\)")

For more complex comment handling, you can override the `Buffer.eat_comments()` method.

For flexibility, it is possible to specify a pattern for end-of-line comments separately:

    parser = MyParser(
        text,
        comments_re="\(\*.*?\*\)",
        eol_comments_re="#.*?$"
    )

Both patterns may also be specified within a grammar using the `@@comments` and `@@eol_comments` directives:

    @@comments :: /\(\*.*?\*\)/
    @@eol_comments :: /#.*?$/
Some languages must reserve certain tokens, making them invalid as identifiers, because the tokens are used to mark particular constructs in the language. Those reserved tokens are known as *Reserved Words* or *Keywords*.

Grako provides support for preventing the use of keywords as identifiers through the `@@keyword` directive and the `@name` decorator.

A grammar may specify reserved tokens by providing a list of them in one or more `@@keyword` directives:

    @@keyword :: if endif
    @@keyword :: else elseif

The `@name` decorator checks that the result of a grammar rule does not match a token defined as a keyword:

    @name
    identifier = /(?!\d)\w+/ ;

There are situations in which a token is reserved only in a very specific context. In those cases, a negative lookahead will prevent the use of the token:

    statements = {!'END' statement}+ ;
There are no constructs for semantic actions in Grako grammars. This is on purpose, because semantic actions obscure the declarative nature of grammars and provide for poor modularization from the parser-execution perspective.
Semantic actions are defined in a class, and applied by passing an object of the class to the `parse()` method of the parser as the `semantics=` parameter. Grako will invoke the method that matches the name of the grammar rule every time the rule parses. The argument to the method will be the AST constructed from the right-hand side of the rule:

    class MySemantics(object):
        def some_rule_name(self, ast):
            return ''.join(ast)

        def _default(self, ast):
            pass
If there's no method matching the rule's name, Grako will try to invoke a `_default()` method if it's defined:

    def _default(self, ast):
        ...

Nothing will happen if neither the per-rule method nor `_default()` is defined.
The per-rule methods in classes implementing the semantics provide enough opportunity to do rule post-processing operations, like verifications (for inadequate use of keywords as identifiers) or AST transformation:

    class MyLanguageSemantics(object):
        def identifier(self, ast):
            if my_language_module.is_keyword(ast):
                raise FailedSemantics('"%s" is a keyword' % str(ast))
            return ast
For finer-grained control it is enough to declare more rules, as the impact on parsing times will be minimal.

If preprocessing is required at some point, it is enough to place invocations of empty rules where appropriate:

    myrule = first_part preproc {second_part} ;

    preproc = () ;

The abstract parser will honor as a semantic action a method declared as:

    def preproc(self, ast):
        ...
Grako grammars support file inclusion through the include directive:

    #include :: "filename"

The resolution of the filename is relative to the directory/folder of the source. Absolute paths and `../` navigations are honored.

The functionality required for implementing includes is available to all Grako-generated parsers through the `Buffer` class; see the `EBNFBuffer` class in the `grako.parser` module for an example.
Naming elements in grammar rules makes the parser discard uninteresting parts of the input, like punctuation, to produce an Abstract Syntax Tree (AST) that reflects the semantic structure of what was parsed. But an AST doesn't carry information about the rule that generated it, so navigating the trees may be difficult.
Grako defines the `grako.model.ModelBuilderSemantics` semantics class, which helps construct object models from abstract syntax trees:

    from grako.model import ModelBuilderSemantics

    parser = MyParser(semantics=ModelBuilderSemantics())

Then you add the desired node type as first parameter to each grammar rule:

    addition::AddOperator = left:mulexpre '+' right:addition ;
`ModelBuilderSemantics` will synthesize an `AddOperator(Node)` class and use it to construct the node. The synthesized class will have one attribute for each named element in the rule.

You can also use Python's built-in types as node types, and `ModelBuilderSemantics` will do the right thing:

    integer::int = /[0-9]+/ ;
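Putting it together, a model built this way can be navigated through the attributes named in the grammar (a sketch based on the `addition` rule above):

    from grako.model import ModelBuilderSemantics

    parser = MyParser(semantics=ModelBuilderSemantics())
    model = parser.parse('1 + 2', rule_name='addition')
    print(model.left, model.right)  # one attribute per named element in the rule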
`ModelBuilderSemantics` acts like any other semantics class, so its default behavior can be overridden by defining a method to handle the result of any particular grammar rule.
The class `grako.model.NodeWalker` allows for the easy traversal (walk) of a model constructed with a `ModelBuilderSemantics` instance:

    from grako.model import NodeWalker

    class MyNodeWalker(NodeWalker):

        def walk_AddOperator(self, node):
            left = self.walk(node.left)
            right = self.walk(node.right)

            print('ADDED', left, right)

    model = MyParser(semantics=ModelBuilderSemantics()).parse(input)

    walker = MyNodeWalker()
    walker.walk(model)
When a method with a name like `walk_AddOperator()` is defined, it will be called when a node of that type is walked. (The pythonic version of the class name may also be used for the walk method: `walk_add_operator()`.)
If a walk method for a node class is not found, then a method for the class's bases is searched, so it is possible to write catch-all methods such as:

    def walk_Node(self, node):
        print('Reached Node', node)

    def walk_str(self, s):
        return s

    def walk_object(self, o):
        raise Exception('Unexpected type %s walked' % type(o).__name__)
Predeclared classes can be passed to `ModelBuilderSemantics` instances through the `types=` parameter:

    from mymodel import AddOperator, MulOperator

    semantics = ModelBuilderSemantics(types=[AddOperator, MulOperator])

`ModelBuilderSemantics` assumes nothing about `types=`, so any constructor (a function, or a partial function) can be used.
It is possible to specify a base class for generated model nodes:

    additive
        =
        | addition
        | substraction
        ;

    addition::AddOperator::Operator
        =
        left:mulexpre op:'+' right:additive
        ;

    substraction::SubstractOperator::Operator
        =
        left:mulexpre op:'-' right:additive
        ;

Grako will generate the base class if it's not already known.
Base classes can be used as the target class in walkers and in code generators:

    class MyNodeWalker(NodeWalker):
        def walk_Operator(self, node):
            left = self.walk(node.left)
            right = self.walk(node.right)
            op = self.walk(node.op)

            print(type(node).__name__, op, left, right)

    class Operator(ModelRenderer):
        template = '{left} {op} {right}'
**note:** As of Grako 3.2.0, code generation is separated from grammar models through `grako.codegen.CodeGenerator`, so as to allow for code-generation targets different from Python. Still, the use of inline templates and `rendering.Renderer` hasn't changed. See the *regex* example for merged modeling and code generation.
Grako doesn't impose a way to create translators with it, but it exposes the facilities it uses to generate the Python source code for parsers.
Translation in Grako is template-based, but instead of defining or using a complex templating engine (yet another language), it relies on the simple but powerful `string.Formatter` of the Python standard library. The templates are simple strings that, in Grako's style, are inlined with the code.
To generate a parser, Grako constructs an object model of the parsed grammar. A `grako.codegen.CodeGenerator` instance matches model objects to classes that descend from `grako.codegen.ModelRenderer` and implement the translation and rendering using string templates. Templates are left-trimmed on whitespace, like Python doc-comments are. This is an example taken from Grako's source code:

    class Lookahead(ModelRenderer):
        template = '''\
                    with self._if():
                    {exp:1::}\
                    '''
Every attribute of the object that doesn't start with an underscore (`_`) may be used as a template field, and fields can be added or modified by overriding the `render_fields(fields)` method. Fields themselves are lazily rendered before being expanded by the template, so a field may be an instance of a `ModelRenderer` descendant.
The `rendering` module defines a `Formatter` enhanced to support the rendering of items in an iterable one by one. The syntax to achieve that is:

    '''
    {fieldname:ind:sep:fmt}
    '''

All of `ind`, `sep`, and `fmt` are optional, but the three colons are not. A field specified that way will be rendered using:

    indent(sep.join(fmt % render(v) for v in value), ind)
The extended format can also be used with non-iterables, in which case the rendering will be:

    indent(fmt % render(value), ind)

The default multiplier for `ind` is `4`, but that can be overridden using `n*m` (for example `3*1`) in the format.
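To make the rendering formula concrete, here is a plain-Python approximation of what a field specified as `{fieldname:1:,:%s}` would produce for a list value (illustrative only; `indent` and `render` are stand-ins for the rendering module's own helpers):

    def indent(text, ind=1, multiplier=4):
        # indent every line in text by ind * multiplier spaces
        pad = ' ' * (ind * multiplier)
        return '\n'.join(pad + line for line in text.splitlines())

    def render(v):
        # stand-in for the lazy rendering of field values
        return str(v)

    value = ['a', 'b', 'c']
    ind, sep, fmt = 1, ',', '%s'
    print(indent(sep.join(fmt % render(v) for v in value), ind))
    # prints "    a,b,c"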
**note:** Using a newline character (`\n`) as separator will interfere with left trimming and indentation of templates. To use a newline as separator, specify it as `\\n`, and the renderer will understand the intention.
Grako provides experimental support for left recursion in PEG grammars. The implementation of left recursion is ongoing; it does not yet handle all cases. The algorithm used is Warth et al's.
Sometimes, while debugging a grammar, it is useful to turn left-recursion support on or off:

    parser = MyParser(
        text,
        left_recursion=True,
    )

Left recursion can also be turned off from within the grammar using the `@@left_recursion` directive:

    @@left_recursion :: False
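A grammar that relies on this support might look like the following sketch (direct left recursion on `expr`; illustrative only):

    @@left_recursion :: True

    start = expr $ ;

    expr
        =
        | expr '+' term
        | term
        ;

    term = /\d+/ ;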
The file `etc/grako.ebnf` contains a grammar for the Grako grammar language, written in its own grammar language. It is used in the bootstrap test suite to prove that Grako can generate a parser to parse its own language, and the resulting parser is made the bootstrap parser every time Grako is stable (see `grako/bootstrap.py` for the generated parser).
Grako uses Grako to translate grammars into parsers, so it is a good example of end-to-end translation.
The project `examples/regexp` contains a regexp-to-EBNF translator and parser generator. The project has no practical use, but it's a complete, end-to-end example of how to implement a translator using Grako.
The project `examples/calc` implements a calculator for simple expressions, and is written as a tutorial over most of the features provided by Grako.
The project `examples/antlr2grako` contains an ANTLR to Grako grammar translator. The project is a good example of the use of models and templates in translation. The program, `antlr2grako.py`, generates the Grako grammar on standard output, but because the model used is Grako's own, the same code can be used to directly generate a parser from an ANTLR grammar. Please take a look at the examples README to learn about limitations.
- Christian Ledermann wrote parsewkt, a parser for Well-Known Text (WKT), using Grako.
- Marcus Brinkmann (lambdafu) wrote smc.mw, a parser for a MediaWiki-style language.
- Marcus Brinkmann (lambdafu) is working on a C++ code generator for Grako called Grako++. Help in the form of testing, test cases, and pull requests is welcome.
You may use Grako under the terms of the BSD-style license described in the enclosed LICENSE.txt file. If your project requires different licensing, please email the author.
For general Q&A, please use the [grako]
tag on StackOverflow.
To discuss Grako and to receive notifications about new releases, please join the low-volume Grako Forum at Google Groups.
You can also follow the latest Grako developments with @GrakoPEG on Twitter.
The following must be mentioned as contributors of thoughts, ideas, code, and funding to the Grako project:
- Niklaus Wirth was the chief designer of the programming languages Euler, Algol W, Pascal, Modula, Modula-2, Oberon, and Oberon-2. In the last chapter of his 1976 book Algorithms + Data Structures = Programs, Wirth creates a top-down, recursive-descent parser with recovery for the Pascal-like, LL(1) programming language PL/0. The structure of the program is that of a PEG parser, though the concept of PEG wasn't formalized until 2004.
- Bryan Ford introduced PEG (parsing expression grammars) in 2004.
- Other parser generators like PEG.js by David Majda inspired the work in Grako.
- William Thompson inspired the use of context managers with his blog post, which I learned about through the invaluable Python Weekly newsletter, curated by Rahul Chaudhary.
- Jeff Knupp explains why Grako's use of exceptions is sound, so I don't have to.
- Terence Parr created ANTLR, probably the most solid and professional parser generator out there. Ter, ANTLR, and the folks on the ANTLR forums helped me shape my ideas about Grako.
- JavaCC (originally Jack) looks like an abandoned project. It was the first parser generator I used while teaching.
- Grako is very fast. But dealing with millions of lines of legacy source code in a matter of minutes would be impossible without PyPy, the work of Armin Rigo and the PyPy team.
- Guido van Rossum created and has led the development of the Python programming environment for over a decade. A tool like Grako, at under six thousand lines of code, would not have been possible without Python.
- Kota Mizushima welcomed me to the CSAIL at MIT PEG and Packrat parsing mailing list, and immediately offered ideas and pointed me to documentation about the implementation of cut in modern parsers. The optimization of memoization information in Grako is thanks to one of his papers.
- My students at UCAB inspired me to think about how grammar-based parser generation could be made more approachable.
- Gustavo Lau was my professor of Language Theory at USB, and he was kind enough to be my tutor in a thesis project on programming languages that was more than I could chew. My peers, and then teaching advisers Alberto Torres, and Enzo Chiariotti formed a team with Gustavo to challenge us with programming languages like LATORTA and term exams that went well into the eight hours. And, of course, there was also the pirate patch that should be worn on the left or right eye depending on the LL or LR challenge.
- Manuel Rey led me through another, unfinished, thesis project that taught me about what languages (spoken languages in general, and programming languages in particular) are about. I learned why languages use declensions, and why, although the underlying words are in English, the structure of the programs we write is more like Japanese.
- Marcus Brinkmann has kindly submitted patches that have resolved obscure bugs in Grako's implementation, and that have made the tool more user-friendly, especially for newcomers to parsing and translation.
- Robert Speer cleaned up the nonsense in trying to have Unicode handling be compatible with 2.7.x and 3.x, and figured out the canonical way of honoring escape sequences in grammar tokens without throwing off the encoding.
- Basel Shishani has been an incredibly thorough peer-reviewer of Grako.
- Paul Sargent implemented Warth et al's algorithm for supporting direct and indirect left recursion in PEG parsers.
- Kathryn Long proposed better support for UNICODE in the treatment of whitespace and regular expressions (patterns) in general. Her other contributions have made Grako more congruent, and more user-friendly.
- David Röthlisberger provided the definitive patch that allows the use of Python keywords as rule names.
The following, among others, have contributed to Grako with features, bug fixes, or suggestions:
basel-shishani, drothlis, franz_g, gapag, gegenschall, gkimbar, jimon, lambdafu, leewz, linkdd, nehz, neumond, pauls, pgebhard, r_speer, siemer, sjbrownBitbucket, starkat, tonico_strasser, vinay.sajip, vmuriart
See the CHANGELOG for details.