Introduction

This library has its origin in a study project I am currently
working on. All source code is covered by the LGPL, which means
you can freely use the code in any project (commercial or non-commercial).

How it works

The tokenizer uses a hierarchical map structure to store the tokens and should be nearly as fast as the
classic lexer generator lex (within a small constant factor). The analyzer is not restricted to LALR(1) grammars (if used correctly), has some
caching capabilities and allows resolution of literal tokens. It also supports a precedence-prioritized rule set.

All projects make heavy use of the STL. Because the STL implementation shipped
with MSVC6 is not the best option, you should download the STLport 4.5.1 implementation,
freely available at www.stlport.org. I have now managed
to get rid of most of the C4786 warnings (hint: the /FI compiler switch).

How to setup the analyzer/tokenizer

A typical rule definition for a simple expression evaluator does the following:
it initializes the tokenizer to recognize the separators '+', '-', '*', '/', '^', ';', '(', ')' and
to skip the usual whitespace characters, and it initializes the analyzer to recognize
the math rules for '+', '-', '*', '/' and '^', declaring, for example, that '*' has a higher priority than '+'.

The interface class

Usually you have to deal only with one class, cxtPackage. This class
exports all methods needed to access the library. Some of them are:

vSetInputStream() - sets the input stream

vSetDelimeterIDs() - can be used to set the end-of-statement tokens; in C++ this could be ';'

nReadUntilDelimeter() - parses the input stream until the next delimiting token is found or the
end of the input stream is reached

papbCheckForRule() - analyzes the token stream for a given rule and returns a parse tree (if successful)

vRebalance() - rearranges the parse tree according to the precedence priority rules

Those are the most important ones. For more details, please see the sample project or
check the page http://www.subground.cc/devel,
where some minimal documentation on the classes will be available for download soon.

New: The grammar IDE

The new grammar IDE included in the complete download provides an environment for developing and testing analyzer
rulesets. It has some syntax-highlighting features, shows errors by marking the affected lines in the editor and has an
integrated test environment to live-check the results of a ruleset. There is no documentation yet, the IDE is
still in early beta, and it has some cosmetic bugs (for example, it is possible to paste RTF-formatted
text into the editor via the clipboard), but most of it is already usable.

The sample projects

There are two sample projects included in the complete download. One is an almost-empty sample application
you can easily use to explore the library; the other, simpleCalc, shows how to use the library
to build a simple expression evaluator in about 200 lines. A step-by-step explanation
of how the sample works is here.

How to use it in your own projects

Make sure to insert the projects cxTokenizer, cxAnalyzer and
cxtPackage into the workspace of your project. Adjust the dependencies of
your project so that it depends on cxtPackage (which itself depends on cxAnalyzer,
which in turn depends on cxTokenizer).
If your project doesn't use the MFC, you also have to add the file common.cpp,
located in the base directory of the project files, to your project.
If you have problems or questions, feel free to mail them to alexander-berthold@web.de.
For the most recent updates, see also http://www.subground.cc/devel

Updates

2002-01-02
C++ (w/o templates and pre-processor) grammar is now included. See also here.

2001-12-29
Fixed small bug in analyzer. IDE download now includes a not yet complete C grammar.

2001-12-27
Updated Homepage-URL to ad-free host.

2001-12-26
Uploaded new release including the grammarIDE and some enhancements in the analyzer.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

Hi, thanks for your comment.
I intentionally did not try to get too deep into 'lex' and 'yacc' before writing this library, so I do not really have any experience using them.

But I think 'cxtPackage' is comparable to lex and yacc. In fact, I am currently writing a C++ compiler built upon 'cxtPackage', and (templates aside) I have so far managed to implement more than 70% of the language grammar using cxtPackage; even complex expressions like (int *(const *a[5])[3]) can be parsed without modifications.
I also never took a closer look at 'Antlr', but I will in the near future.

In terms of speed, I'd say 'cxtPackage' could even be faster than lex/yacc, but I have not measured this either.

I found that installing SP5 went a long way toward getting rid of the troublesome C4786 warnings. I think you still need the #pragma directive to turn them off, but in SP5 this directive seems to work properly.

Naming conventions save everybody a lot of time when it comes to reading another person's code... I have found that people are using them less and less often and I attribute this to the large increase in the number of data types that have been "invented" with Windows programming and OOP.