Introduction

This library has its origin in a study project I am currently
working on. All source code is covered by the LGPL, which means
you can freely use the code in any project, commercial or non-commercial.

How it works

The tokenizer uses a hierarchical map structure to store the tokens, and should be about as fast as its
paragon, lex (at worst within a constant factor of lex's running time). The analyzer does not have the restriction of being a LALR(1) analyzer (if used correctly), has some
caching capabilities and allows resolution of literal tokens. It also supports a precedence-prioritized rule set.
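The article does not show the internal data structure, but as a rough illustration (my own sketch, not the library's actual code), a hierarchical map of characters behaves like a trie: each node maps the next input character to a child node, and matching a token is a walk from the root that remembers the longest match seen so far.

    #include <map>
    #include <string>

    // Illustrative sketch only - not the library's actual data structure.
    struct TokenNode
    {
        int                        nTokenID;   // -1 = no token ends here
        std::map<char, TokenNode*> mapChilds;  // next-character transitions
        TokenNode() : nTokenID(-1) {}
    };

    // Walk the trie along the input, remembering the longest match seen.
    int nMatchLongestToken(const TokenNode* pRoot, const std::string& strIn,
                           std::string::size_type nPos)
    {
        const TokenNode* pNode = pRoot;
        int nBestID = -1;
        while (nPos < strIn.size())
        {
            std::map<char, TokenNode*>::const_iterator it =
                pNode->mapChilds.find(strIn[nPos]);
            if (it == pNode->mapChilds.end())
                break;                      // no transition for this character
            pNode = it->second;
            ++nPos;
            if (pNode->nTokenID != -1)
                nBestID = pNode->nTokenID;  // longest match so far
        }
        return nBestID;
    }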

All projects make heavy use of the STL. Because the STL implementation shipped
with MSVC6 is not the best option, you should download the STLport 4.5.1 implementation,
freely available at www.stlport.org. I have now managed
to get rid of most of the C4786 warnings (hint: the /FI compiler switch).

How to set up the analyzer/tokenizer

A typical rule definition for a simple expression evaluator initializes the tokenizer to
recognize the separators '+', '-', '*', '/', '^', ';', '(' and ')' and to skip the usual
white space characters, and initializes the analyzer to recognize the math rules for
'+', '*', '-', '/' and '^', declaring, for example, that '*' has a higher priority than '+'.
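The original code listing for this setup is not reproduced here. As a purely hypothetical sketch of what it could look like - vAddSeparator(), vSkipWhitespace() and vAddRule() are invented names in the style of the documented interface below, and only the separators, the whitespace skipping and the precedence idea come from the text:

    // Hypothetical sketch - these method names are NOT the real API.
    cxtPackage pkg;

    // Separators named in the text: '+', '-', '*', '/', '^', ';', '(' and ')'.
    const char* apszSeps[] = { "+", "-", "*", "/", "^", ";", "(", ")" };
    for (int i = 0; i < 8; ++i)
        pkg.vAddSeparator(apszSeps[i]);  // invented name
    pkg.vSkipWhitespace(true);           // invented name

    // Math rules with precedence priorities: a higher number binds tighter,
    // so '*' gets a higher priority than '+'.
    pkg.vAddRule(".expr", '+', 1);       // invented name
    pkg.vAddRule(".expr", '-', 1);
    pkg.vAddRule(".expr", '*', 2);
    pkg.vAddRule(".expr", '/', 2);
    pkg.vAddRule(".expr", '^', 3);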

The interface class

Usually you have to deal only with one class, cxtPackage. This class
exports all methods needed to access the library. Some of them are:

vSetInputStream() - sets the input stream

vSetDelimeterIDs() - can be used to set the end-of-statement tokens; in C++, for example, this could be ';'

nReadUntilDelimeter() - parses the input stream until the next delimiting token is found or the
end of the input stream is reached

papbCheckForRule() - analyzes the token stream for a given rule and returns a parse tree (if successful)

vRebalance() - rearranges the parse tree according to the precedence priority rules

Those are the most important ones. For more details, please see the sample project or
check http://www.subground.cc/devel,
which will soon have some minimal class documentation available for download. A sketch of a typical call sequence follows below.
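To give an impression of how these methods fit together, here is a minimal driving loop using only the methods listed above. The parameter and return types (the input stream, the delimiter ID, and the ParseTree type) are my assumptions; the article does not show the actual signatures.

    // Sketch only - parameter and return types are assumptions.
    cxtPackage pkg;
    pkg.vSetInputStream(&myInputStream);  // attach the input stream
    pkg.vSetDelimeterIDs(nSemicolonID);   // ';' marks end-of-statement

    // Read tokens up to the next delimiter (or the end of the input).
    while (pkg.nReadUntilDelimeter() > 0)
    {
        // Try to match the buffered token stream against a rule.
        ParseTree* pTree = pkg.papbCheckForRule(".expr");
        if (pTree != 0)
        {
            pkg.vRebalance(pTree);  // apply the precedence priority rules
            // ... evaluate or transform the parse tree here ...
        }
    }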

New: The grammar IDE

The new grammar IDE included in the complete download provides an environment to develop and test analyzer
rulesets. It has some syntax-highlighting features, shows errors by marking the affected lines in the editor, and has an
integrated test environment to live-check the results of the ruleset. I have no documentation yet, and the IDE is
still an early beta with some cosmetic bugs (for example, it is possible to paste RTF-formatted
text into the editor via the clipboard), but most of it is already usable.

The sample projects

There are two sample projects included in the complete download. One is an almost-empty sample application
you can easily use to explore the library, and the other project, simpleCalc, shows how to use the library
to build a simple expression evaluator in 200 lines. A step-by-step explanation
on how the sample works is here.
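The sample's source is not reproduced here, but the core of any such evaluator is a recursive walk over the parse tree. A generic sketch (my own, not simpleCalc's actual code):

    #include <cmath>    // for std::pow

    // Generic node of a binary expression parse tree.
    struct Node
    {
        char   chOp;    // '+', '-', '*', '/', '^', or 0 for a number
        double dValue;  // valid when chOp == 0
        Node*  pLeft;
        Node*  pRight;
    };

    // Evaluate bottom-up: children first, then apply the operator.
    double dEvaluate(const Node* pNode)
    {
        if (pNode->chOp == 0)
            return pNode->dValue;            // leaf: a plain number
        double dL = dEvaluate(pNode->pLeft);
        double dR = dEvaluate(pNode->pRight);
        switch (pNode->chOp)
        {
            case '+': return dL + dR;
            case '-': return dL - dR;
            case '*': return dL * dR;
            case '/': return dL / dR;
            default:  return std::pow(dL, dR);  // '^'
        }
    }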

How to use it in your own projects

Make sure to insert the projects cxTokenizer, cxAnalyzer and
cxtPackage in the workspace of your project. Adjust the dependencies of
your project to depend on cxtPackage (which itself depends on cxAnalyzer,
which in turn depends on cxTokenizer).
If your project doesn't use the MFC, you have to add the file common.cpp,
which is located in the base directory of the project files, to your project.
If you have problems or questions, feel free to mail them to alexander-berthold@web.de.
For the most recent updates, see also http://www.subground.cc/devel

Updates

2002-01-02
A C++ grammar (without templates and the preprocessor) is now included. See also here.

2001-12-29
Fixed a small bug in the analyzer. The IDE download now includes a (not yet complete) C grammar.

2001-12-27
Updated the homepage URL to an ad-free host.

2001-12-26
Uploaded a new release including the grammarIDE and some enhancements in the analyzer.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

Comments

I need to be able to represent rules that describe modifications to an existing parse tree (changing leaves, changing the ordering of non-leaf branches, changing the structure of branches, removing a branch, and removing a branch and replacing it with another instance of that abstract branch type).

The operations I need to perform are: given a word and its parse tree, perform mutations involving the word and the tree. I perform mutations based on rules that might need to use knowledge of the parse tree.

If anyone can give a lead to references or source code it would be appreciated.

This software seems excellent to me. I don't understand the comparisons with flex/bison, since with them you cannot obtain a parse tree. Surely the great thing about this is being able to parse user input expressions, and then evaluate them at a later trigger point.

I was wondering if you had any facility to convert the parse tree to a postfix array, or any other routines that could make it easier to actually invoke the tree, resolving symbols to data or functions in a program.

Clearly, for most applications it's only the evaluation time that's relevant.
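For what it's worth, what I have in mind is essentially a post-order traversal: emit both children, then the node itself. A generic sketch with a made-up node type, since I don't know the library's:

    #include <vector>

    struct Node
    {
        char   chOp;    // operator, or 0 for a literal
        double dValue;  // valid when chOp == 0
        Node*  pLeft;
        Node*  pRight;
    };

    struct PostfixItem { bool bIsOp; char chOp; double dValue; };

    // Post-order traversal yields the expression in postfix order.
    void vToPostfix(const Node* pNode, std::vector<PostfixItem>& aOut)
    {
        if (pNode == 0)
            return;
        vToPostfix(pNode->pLeft, aOut);   // children first ...
        vToPostfix(pNode->pRight, aOut);
        PostfixItem item;                 // ... then the node itself
        item.bIsOp  = (pNode->chOp != 0);
        item.chOp   = pNode->chOp;
        item.dValue = pNode->dValue;
        aOut.push_back(item);
    }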

I have some preliminary results to share. I wrote a lex-based calculator very similar to Alexander's simpleCalc program. I then wrote a program called genfile that can generate arbitrarily-long mathematical expressions with arbitrary and random parenthesis nesting for the calculators to work on. I generated a file that was 10 MB in size with a maximum nesting depth of 20. I ran this file through my bison/flex calculator and it took 13.7 seconds for the file to parse and evaluate the expression. I then ran the file through Alexander's simpleCalc program. It is still running as I write this, 42 minutes later.

I have provided full source and projects to Alexander so that he can perform additional tests/comparisons, but it seems obvious that this system performs several orders of magnitude worse than bison/flex for an equivalent implementation.

Hi,
your comparison might be quite misleading.
If Alexander's package builds a parse tree, this tree will use at least 20 to 50 times more memory than your comparable bison/flex program. The long running time in your test may therefore be a result of swapping.
Right?

The point is, does a simple calculator need a parse tree? Probably not!

You are correct, of course. The calculator is an oversimplified example. Obviously it doesn't need a parse tree as I was able to write my version completely in flex.

As best I can recall from the testing, no hard drive swapping was involved. I ran both programs on a development server with 1 GB of RAM. The real problem seemed to be the dynamic lexing in Alexander's program. My point was just to demonstrate that his program was neither speed-neutral with nor faster than equivalent parsers built using standard tools, where those tools were applicable.

There are certainly other cases where the extra features of Alexander's program make possible parsers that can't be done (easily, efficiently or even at all) in bison/flex. But for nontrivial inputs in equivalent problem domain implementations, I'm confident that properly-implemented bison/flex will outperform this package every time, often by considerable margins.

Granted lex and yacc (really, flex and bison) produce pretty ugly code. But you should never need to even look at that code. In my MFC projects that need parsing I include the clean, understandable .l and .yy files in my project with custom build steps that invoke flex and bison on them. The resultant files are included in the build without my ever needing to see them.

The advantages of using the existing tools are: Flex and bison are free, available today, extremely fast, portable and almost completely rock-stable; their behavior is well-defined and lots of people know how to use them. Contrast this to your yacc replacement, which will be buggy, and will have non-standard syntax.

Really, we should all be happy that flex produces ugly code. It's that static, ugly code that makes it so screamingly fast, since the token map is implemented in compilable, optimizable C code. Your solution implements lexing dynamically, which almost guarantees that it will be drastically slower than flex for equivalent tasks.
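To illustrate the point (my own sketch, not actual flex output): a generated scanner boils down to a statically initialized transition table that the compiler can optimize aggressively, while a dynamic lexer has to chase pointers through maps built at run time.

    // Illustration only - real flex tables are far more elaborate.
    // A static DFA: the transition table is known at compile time.
    enum { STATES = 3, CLASSES = 2 };
    static const int anNext[STATES][CLASSES] =
    {
        { 1, 2 },   // state 0: start
        { 1, 2 },   // state 1: inside a number
        { 2, 2 },   // state 2: error/sink
    };

    int nRun(const char* psz)
    {
        int nState = 0;
        for (; *psz; ++psz)
        {
            int nClass = (*psz >= '0' && *psz <= '9') ? 0 : 1;
            nState = anNext[nState][nClass];  // one array lookup per char
        }
        return nState;
    }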

I would be very interested in performing a comparison test using a complex grammar such as C++ with a very large dataset. I'm not sure that the data you provide is directly comparable, however, for the following reason:

When a scanner and parser are generated by flex and bison, the resulting code takes the input text and analyzes it with no assumptions as to which rule the input will ultimately reduce to. Your grammarIDE program seems to require me to specify what type of syntactic element I think the input text will be, which bypasses some analysis and is unrealistic for real-world applications. I tried to paste some arbitrary C++ code into the grammarIDE and analyze it, but except for your example and one variable declaration I could not get anything to resolve. Perhaps I did not specify the correct rule to evaluate.

Is your grammar complete? Do you have a large block of sample C++ representing an entire nontrivial program that you can feed through your grammar using a rule like ".program" and get it to parse? If so, I'd be interested in timing it against an equivalent flex/bison C++ grammar.

If you like, we can use a simpler grammar and a very large amount of input (say, parsing sentences out of the contents of Project Gutenberg) -- this might make the comparison more straightforward by rendering the respective inefficiencies of the different grammar implementations less of an issue.

I have made a C grammar, and for the pseudo-C code at http://www.subground.cc/devel/sample-pseudo-c-prog.txt it takes 80 msec to tokenize and 67 msec to analyze and build the parse tree (".globalscopeblock"). I also corrected a small bug; if you want to try the grammar, please download the binary again.

The idea of trying a large dataset is interesting. Maybe I can contact you via mail?

Thanks for the code, good job!! Just a correction:
when splitting an expression with operators of equal precedence, you have to search for the split point from right to left, not from left to right.
If the example equation shown above had been:
1-2*(3-4)+8
your program would have divided it as:
1 - expr, where expr = 2*(3-4)+8,
and this is wrong. It should be divided as expr + 8 first, and then expr divided as 1 - expr2.
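To make the difference concrete: since 3-4 = -1 and 2*(-1) = -2, the correct division evaluates to (1 - (-2)) + 8 = 3 + 8 = 11, while the division 1 - (2*(3-4) + 8) evaluates to 1 - (-2 + 8) = 1 - 6 = -5.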

Hi, thanks for your comment.
Intentionally, I did not try to get too deep into 'lex' and 'yacc' before writing this library, so I do not really have experience using them.

But I think 'cxtPackage' is comparable to lex and yacc. In fact, I am currently writing a C++ compiler built upon 'cxtPackage', and (templates aside) I have so far managed to implement more than 70% of the language grammar using cxtPackage; even complex declarations like (int *(const *a[5])[3]) can be parsed without modifications.
I also never took a closer look at 'Antlr', but I will in the near future.

In terms of speed, I'd say 'cxtPackage' could even be faster than lex/yacc, but I have never tested this either.

I found that installing SP5 went a long way toward getting rid of the troublesome C4786 warnings. I think you still need the #pragma directive to turn them off, but in SP5 this directive seems to work properly.

Naming conventions save everybody a lot of time when it comes to reading another person's code... I have found that people are using them less and less often and I attribute this to the large increase in the number of data types that have been "invented" with Windows programming and OOP.

I personally see this as a matter of opinion: if you don't use Hungarian notation consistently and have algorithms with obvious variable types and names, all is fine.
For Windows-only programs, I also do not use Hungarian notation - believe it or not.
But some parts of the code I submitted here deal with elements of a hierarchical linked list ('node', 'branch' and 'element' type objects). A 'node' element contains - among other things - a 'token' ...
I found it useful in this case because with a quick look I can see to which part of the data structure a variable belongs. My personal opinion is to use whatever tool or method helps most to solve a given problem; this can even be Visual Basic.

In this case, it's not so much a question of "ancient" as it is K&R C vs C++. K&R C (without prototypes) didn't do any type checking of parameters--you could happily call a function like void foo(int a) with a double, and the compiler wouldn't stop you. The usual symptom was a GP fault with a nearly-unintelligible stack at some later point in your program. Made for fun debugging.

C with prototypes (as well as C++) takes care of this; some people see additional value in embedding type information in variable names for humans to read.

Usually I do not use Hungarian notation. Please believe me.
But in data-structure-intensive projects it can be nice to see at a glance which type a variable has. This is not important in UI or business-logic code, but it has its purpose sometimes.

I don't really think you can develop any serious project without a naming convention. Hungarian notation is used for many projects, the Microsoft ones for example. I think it proves that you have to choose a notation before starting a project.

It avoids conflicts with other libraries or sources, and it cleans up the whole source base because the files look similar. If you don't use prefixes, you can get clashes like a class named execute and a function named execute. I really think it's better to write CExecute for the class and Execute for the function. Moreover, if everybody uses the same notation, it's like speaking the same natural language.

So it's not the particular notation that really matters; the important thing is to choose one.