Creating grammars
Tokenization of rules
For now, tokenization of rules is done only at the character level, so every grammar symbol must be a single character.
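Character-level tokenization means the right-hand side of each production is split into one symbol per character. A minimal sketch of how such a tokenizer could work (this is an illustrative `tokenize_rules` helper, not the library's actual parser):

```python
def tokenize_rules(rules):
    """Split a grammar string like 'S → aS; S → bA' into productions,
    tokenizing each right-hand side one character at a time."""
    productions = []
    for rule in rules.split(";"):
        lhs, rhs = rule.split("→")
        # character level: every character of the RHS is its own symbol
        symbols = list(rhs.strip())
        productions.append((lhs.strip(), symbols))
    return productions

print(tokenize_rules("S → aS; S → bA; A → ε"))
# → [('S', ['a', 'S']), ('S', ['b', 'A']), ('A', ['ε'])]
```

This is also why a multi-character symbol such as `id` would currently be read as the two symbols `i` and `d`.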
Regular grammars (RG)
Create a RegularGrammar from a string:
from maquinas.regular.rg import RegularGrammar as RG
g = RG('S → aS; S → bA; A → ε; A → cA')  # a*bc*
g.print_summary()
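To see that this grammar really describes a*bc*, you can enumerate its short derivations in plain Python, independently of maquinas. The sketch below treats uppercase letters as nonterminals and does a breadth-first leftmost expansion of sentential forms:

```python
import re
from collections import deque

# The grammar from above; ε is written as the empty string
RULES = {"S": ["aS", "bA"], "A": ["", "cA"]}

def generate(max_len=4):
    """Collect all terminal strings derivable from S, up to max_len."""
    seen, out = set(), set()
    queue = deque(["S"])
    while queue:
        form = queue.popleft()
        if form in seen or len(form) > max_len:
            continue
        seen.add(form)
        nonterminals = [i for i, c in enumerate(form) if c.isupper()]
        if not nonterminals:
            out.add(form)  # fully terminal string
            continue
        i = nonterminals[0]  # leftmost derivation
        for rhs in RULES[form[i]]:
            queue.append(form[:i] + rhs + form[i + 1:])
    return out

words = generate(4)
assert all(re.fullmatch(r"a*bc*", w) for w in words)
print(sorted(words, key=len))
```

Every generated word matches the regular expression `a*bc*`, confirming the comment in the example.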
Context free grammars (CFG)
Create a ContextFreeGrammar from a string:
from maquinas.contextfree.cfg import ContextFreeGrammar as CFG
g = CFG("S -> ACB; C -> ACB; C -> AB; A -> a; B -> b")  # a^n b^n, n >= 2
g.print_summary()
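This grammar generates a^n b^n for n ≥ 2: S rewrites to A C B, and each application of C → ACB nests one more matched a/b pair, with C → AB closing the recursion. A quick plain-Python check, using the same breadth-first expansion idea and independent of maquinas:

```python
from collections import deque

# The CFG from above, one entry per nonterminal
RULES = {"S": ["ACB"], "C": ["ACB", "AB"], "A": ["a"], "B": ["b"]}

def generate(max_len=8):
    """Collect all terminal strings derivable from S, up to max_len."""
    seen, out = set(), set()
    queue = deque(["S"])
    while queue:
        form = queue.popleft()
        if form in seen or len(form) > max_len:
            continue
        seen.add(form)
        nonterminals = [i for i, c in enumerate(form) if c.isupper()]
        if not nonterminals:
            out.add(form)  # fully terminal string
            continue
        i = nonterminals[0]  # leftmost derivation
        for rhs in RULES[form[i]]:
            queue.append(form[:i] + rhs + form[i + 1:])
    return out

words = generate(8)
print(sorted(words, key=len))
# → ['aabb', 'aaabbb', 'aaaabbbb']
```

Note that the shortest word is `aabb` (n = 2), since S always produces at least the two pairs contributed by A…B and one C → AB step.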