With such a design, processors can simply be chained together:
A parser creates an AST, which is passed to the linker (creating a table of contents on the fly), which then passes it further down to a formatter.
parser = ...
linker = ...
formatter = ...

ast = AST()
ast = parser.process(ast, input=['source.hh'])
ast = linker.process(ast)
ast = formatter.process(ast, output='html')
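What makes this chaining possible is that every processor follows the same small convention: process() takes an AST together with keyword parameters and returns an AST. The following is a minimal sketch of that convention; the trivial AST class and the Parser body are illustrative assumptions, not the real implementations:

class AST:
    """Illustrative stand-in; the real AST holds parsed declarations."""
    def __init__(self):
        self.declarations = []

class Processor:
    """Assumed convention: map an AST to an AST, driven by parameters."""
    def process(self, ast, **parameters):
        return ast

class Parser(Processor):
    def process(self, ast, input=(), output=None):
        for filename in input:
            # A real parser would parse 'filename' here and merge the
            # resulting declarations into 'ast'.
            ast.declarations.append(filename)
        return ast

Since every process() call returns the (possibly augmented) AST, the result of one processor can be fed directly into the next, as in the pipeline above.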
And, to be a little more scalable, and to allow the use of dependency-tracking build tools such as make, the intermediate ASTs can be persisted to files. Thus, the above pipeline is broken up into multiple pipelines, where the 'output' parameter of the parser points to an AST store, and the 'input' parameter of the linker/formatter pipeline contains the list of these AST store files (a make-style dependency check is sketched after the example below).
Parse source1.hh and write the AST to source1.syn:

parser.process(AST(), input=['source1.hh'], output='source1.syn')
Parse source2.hh and write the AST to source2.syn:

parser.process(AST(), input=['source2.hh'], output='source2.syn')
Read in source1.syn and source2.syn, then link and format into the html directory:

ast = linker.process(AST(), input=['source1.syn', 'source2.syn'])
formatter.process(ast, output='html')
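Because each AST store depends only on its own source file, a tool like make can rebuild just the stores whose sources have changed. The same timestamp comparison can be expressed directly in Python; this is a rough sketch, assuming 'parser' and 'AST' are the objects set up above:

import os

def out_of_date(source, store):
    """True if 'store' is missing or older than 'source' (make's rule)."""
    return (not os.path.exists(store)
            or os.path.getmtime(store) < os.path.getmtime(source))

for source, store in [('source1.hh', 'source1.syn'),
                      ('source2.hh', 'source2.syn')]:
    if out_of_date(source, store):
        # Re-parse only the sources whose AST stores are stale.
        parser.process(AST(), input=[source], output=store)

In an actual makefile, each .syn target would simply list its source file as a prerequisite, and the linker/formatter step would list all the .syn stores as its inputs.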