PEP 269 – Pgen Module for Python
- Author: Jonathan Riehl (jriehl at spaceship.com)
- Type: Standards Track
Much like the parser module exposes the Python parser, this PEP
proposes that the parser generator used to create the Python
parser, pgen, be exposed as a module in Python.
Through the course of Pythonic history, there have been numerous
discussions about the creation of a Python compiler. These
have resulted in several implementations of Python parsers, most
notably the parser module currently provided in the Python
standard library and Jeremy Hylton’s compiler module.
However, while multiple language changes have been proposed,
experimentation with the Python syntax has lacked the
benefit of a Python binding to the actual parser generator used to
build Python.
By providing a Python wrapper analogous to Fred Drake Jr.’s parser
wrapper, but targeted at the
pgen library, the following
assertions are made:
- Reference implementations of syntax changes will be easier to
develop. Currently, a reference implementation of a syntax
change would require the developer to use the
pgen tool from the command line. The resulting parser data structure would then either have to be reworked to interface with a custom CPython implementation, or wrapped as a C extension module.
- Reference implementations of syntax changes will be easier to distribute. Since the parser generator will be available in Python, it should follow that the resulting parser will be accessible from Python. Therefore, reference implementations should be available as pure Python code, versus using custom versions of the existing CPython distribution, or as compilable extension modules.
- Reference implementations of syntax changes will be easier to discuss with a larger audience. This somewhat falls out of the second assertion, since the community of Python users is most likely larger than the community of CPython developers.
- Development of small languages in Python will be further enhanced, since the additional module will be a fully functional LL(1) parser generator.
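To give a concrete feel for the last point, here is a minimal pure-Python sketch of the kind of machinery an LL(1) parser generator produces: a stack-driven parser walking a parse table. The grammar, table, and function names below are illustrative only and are not part of the proposed pgen API.

```python
# Illustrative sketch only: a hand-built LL(1) parse table for the toy
# grammar below, driven by the standard stack-based LL(1) algorithm.
#
#   expr      : INT expr_tail
#   expr_tail : '+' INT expr_tail | <empty>

TABLE = {
    # (nonterminal, lookahead token) -> production right-hand side
    ("expr", "INT"): ["INT", "expr_tail"],
    ("expr_tail", "+"): ["+", "INT", "expr_tail"],
    ("expr_tail", "$"): [],            # epsilon production at end-of-input
}

def ll1_parse(tokens):
    """Return True if `tokens` (a list of token names) derives from `expr`."""
    stack = ["$", "expr"]              # bottom marker, then the start symbol
    tokens = tokens + ["$"]
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos]
        if top == look:                # terminal (or $): consume the token
            pos += 1
        elif (top, look) in TABLE:     # nonterminal: expand via the table
            stack.extend(reversed(TABLE[(top, look)]))
        else:
            return False               # no rule applies: syntax error
    return pos == len(tokens)

print(ll1_parse(["INT", "+", "INT"]))  # True: a valid expression
print(ll1_parse(["INT", "+"]))         # False: trailing '+'
```

A real generator would derive the table automatically from a grammar; writing tables like this by hand is exactly the tedium the proposed module would remove.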
The proposed module will be called pgen. The pgen module will
contain the following functions:
parseGrammarFile (fileName) -> AST
The parseGrammarFile() function will read the file pointed to
by fileName and create an AST object. The AST nodes will
contain the nonterminal, numeric values of the parser
generator meta-grammar. The output AST will be an instance of
the AST extension class as provided by the parser module.
Syntax errors in the input file will cause the SyntaxError
exception to be raised.
parseGrammarString (text) -> AST
The parseGrammarString() function will follow the semantics of
parseGrammarFile(), but accept the grammar text as a
string for input, as opposed to the file name.
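For context, the grammar text these two functions would consume is written in pgen’s meta-grammar notation, the same notation used in CPython’s Grammar/Grammar file. The two example rules below are taken from the Python grammar of that era:

```
# ':' separates a rule name from its right-hand side; | is alternation,
# ()* is repetition, and [] marks an optional part.
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
arith_expr: term (('+'|'-') term)*
```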
buildParser (grammarAst) -> DFA
The buildParser() function will accept an AST object for input
and return a DFA (deterministic finite automaton) data
structure. The DFA data structure will be a C extension
class, much like the AST structure is provided in the parser
module. If the input AST does not conform to the nonterminal
codes defined for the pgen meta-grammar, buildParser() will
throw a ValueError exception.
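The DFA object described here would be an opaque C extension type. Purely as an illustration of the kind of data such an object encodes, a deterministic automaton can be modeled as a transition table plus a set of accepting states; none of the names or values below come from the proposed API.

```python
# Illustration only: a DFA accepting one or more 'a's followed by 'b',
# encoded as states, labeled transitions, and accepting states.
TRANSITIONS = {
    (0, "a"): 1,   # initial state: must see at least one 'a'
    (1, "a"): 1,   # loop on further 'a's
    (1, "b"): 2,   # a final 'b' moves to the accepting state
}
ACCEPTING = {2}

def dfa_accepts(symbols):
    """Run the DFA over `symbols`; True if it ends in an accepting state."""
    state = 0
    for sym in symbols:
        key = (state, sym)
        if key not in TRANSITIONS:
            return False               # no transition defined: reject
        state = TRANSITIONS[key]
    return state in ACCEPTING

print(dfa_accepts("aab"))   # True
print(dfa_accepts("ab"))    # True
print(dfa_accepts("b"))     # False
```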
parseFile (fileName, dfa, start) -> AST
The parseFile() function will essentially be a wrapper for the
PyParser_ParseFile() C API function. The wrapper code will
accept the DFA C extension class, and the file name. An AST
instance that conforms to the lexical values in the token
module and the nonterminal values contained in the DFA will be
output.
parseString (text, dfa, start) -> AST
The parseString() function will operate in a similar fashion
to the parseFile() function, but accept the parse text as an
argument. Much like parseFile() will wrap the
PyParser_ParseFile() C API function, parseString() will wrap
the PyParser_ParseString() function.
symbolToStringMap (dfa) -> dict
The symbolToStringMap() function will accept a DFA instance
and return a dictionary object that maps from the DFA’s
numeric values for its nonterminals to the string names of the
nonterminals as found in the original grammar specification
for the DFA.
stringToSymbolMap (dfa) -> dict
The stringToSymbolMap() function will output a dictionary mapping
the nonterminal names of the input DFA to their corresponding
numeric values.
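The two map functions are inverses of one another. With a toy symbol table standing in for a real DFA (the entries mirror the historical symbol module’s numbering, but are shown here only for illustration), the relationship is plain dictionary inversion:

```python
# Hypothetical symbol table: nonterminal number -> name, as
# symbolToStringMap(dfa) would return it for some DFA.
symbol_to_string = {256: "single_input", 257: "file_input", 258: "eval_input"}

# stringToSymbolMap(dfa) would return the inverse mapping.
string_to_symbol = {name: num for num, name in symbol_to_string.items()}

print(string_to_symbol["file_input"])  # 257
```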
Extra credit will be awarded if the map generation functions and parsing functions are also methods of the DFA extension class.
A cunning plan has been devised to accomplish this enhancement:
- Rename the pgen functions to conform to the CPython naming standards. This action may involve adding some header files to the Include subdirectory.
- Move the pgen C modules in the Makefile.pre.in from unique pgen elements to the Python C library.
- Make any needed changes to the parser module so the AST extension class understands that there are AST types it may not understand. Cursory examination of the AST extension class shows that it keeps track of whether the tree is a suite or an expression.
- Code an additional C module in the Modules directory. The C extension module will implement the DFA extension class and the functions outlined in the previous section.
- Add the new module to the build process. Black magic, indeed.
Under this proposal, would-be designers of Python 3000 will still be constrained to Python’s lexical conventions. The addition, subtraction, or modification of the Python lexer is outside the scope of this PEP.
No reference implementation is currently provided. A patch was once provided at http://sourceforge.net/tracker/index.php?func=detail&aid=599331&group_id=5470&atid=305470, but that patch is no longer maintained.
This document has been placed in the public domain.
Last modified: 2022-01-21 11:03:51 GMT