Nikos, who is responsible for the macro compiler, showed a lot of performance tests during his sessions in Cologne, so speed is an issue for the development team.
The next thing is that X# macros are effectively compiled to .NET binary code.

I'm expecting a lot more speed from the "fast" version of the macrocompiler.

These are two separate things: one is the speed of the macro compiler itself, for example when compiling an expression inside a string (like a filter expression) into a codeblock; the other is the execution speed (Eval()) of that macro-compiled codeblock, or of normal codeblocks written by the programmer.

The execution speed of codeblocks should already be very close to the execution speed you'd get if the code in the codeblock were regular X# code, compiled by the regular X# compiler. What Nikos is working on is improving the speed of the macro compiler itself, so that compiling (code inside) strings into codeblocks will be a lot faster.
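To illustrate the two separate costs, here is a Python analogy (not X# code): compile() stands in for the macro compiler and eval() for codeblock execution, so the numbers only illustrate that the two phases are measured independently.

```python
# Python analogy: the cost of turning a string into executable code
# is separate from the cost of running that compiled code.
import timeit

expr = "price * qty > 1000"          # a hypothetical filter expression
env = {"price": 12.5, "qty": 100}

# Phase 1: "macro compile" the string into a code object (the slow part).
code = compile(expr, "<filter>", "eval")

# Phase 2: evaluate the compiled code many times (the fast part).
compile_time = timeit.timeit(lambda: compile(expr, "<filter>", "eval"), number=10_000)
eval_time = timeit.timeit(lambda: eval(code, {}, env), number=10_000)

print(f"compile: {compile_time:.4f}s  eval: {eval_time:.4f}s")
```

The same split applies to X#: MCompile()-style compilation happens once per expression, Eval()-style execution happens every time the codeblock runs.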

Unfortunately I think you forgot to add the attachments. But I checked the code and it is using "Vulcan.Codeblock"; this should not compile at all when you use the X# runtime (the class name is now XSharp.Codeblock, but better to just use CODEBLOCK instead). Maybe you have included references to both the X# and Vulcan runtimes in the same app, causing the runtime issues you are seeing?

A few additional general considerations:

- Both in X# and Vulcan, the first macro compilation takes a lot of time, because the macro compiler has to be loaded into memory and initialized. So any practical speed test of the macro compiler should include a first call to the macro compiler that is not taken into account when measuring the time needed to complete the tests.
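The warm-up pattern could be sketched like this (a Python sketch; macro_compile() is an invented stand-in for the real macro compiler, whose first call pays the one-time initialization cost):

```python
# Warm-up pattern: make one untimed call before measuring, so one-time
# initialization cost is excluded from the benchmark.
import time

def macro_compile(src):
    # Stand-in for the X#/Vulcan macro compiler (invented for illustration).
    return compile(src, "<macro>", "eval")

macro_compile("1 + 1")               # warm-up call: pays init cost, not timed

start = time.perf_counter()
for _ in range(1000):
    macro_compile("a + b * 2")
elapsed = time.perf_counter() - start
print(f"1000 compilations: {elapsed:.4f}s")
```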

- In X#, macro-compiled expressions are cached and reused when needed, so when you use the same macro expressions often, there will be a big performance boost over Vulcan (and VO?), which IIRC did not have this caching feature. And using the same macro expression several times is, I think, very common in real apps.
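The caching idea can be sketched like this (a Python stand-in; the actual X# cache is internal to the runtime, so this only shows the principle of compiling each distinct expression once):

```python
# Sketch of expression caching: compile each distinct source string once,
# then return the already-compiled codeblock on repeated requests.
from functools import lru_cache

@lru_cache(maxsize=None)
def macro_compile_cached(src):
    return compile(src, "<macro>", "eval")

cb1 = macro_compile_cached("x + 1")
cb2 = macro_compile_cached("x + 1")   # cache hit: same compiled object returned
assert cb1 is cb2
print(eval(cb1, {}, {"x": 41}))
```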

- At this moment, it does not really make sense to compare the speed of the X# macro compiler itself to the macro compiler of Vulcan or VO (how fast MCompile() executes). The current first version of the X# macro compiler theoretically supports everything the regular compiler supports, which is far more than what the Vulcan and VO macro compilers can do. Because of that it is comparatively slow and thus can be used only when macro compiling speed is not crucial. But we are working on a compact version of the macro compiler which will still have all the necessary functionality and will be literally thousands or tens of thousands of times (or more) faster than the current one. Only when we release that version will it make sense to test the speed of the macro compiler itself.

- The speed of the code in the runtime codeblocks produced by the macro compiler (how fast Eval()/MExec() etc. executes) should already be good, and this is indeed an aspect of the runtime really worth running performance tests on!


But I checked the code and it is using "Vulcan.Codeblock", this should not compile at all when you use the X# runtime

That was the only thing I had to change. The Vulcan version uses Vulcan.Codeblock, the X# version _CodeBlock. Even when the application is using the X# runtime, the XIDE "intellisense" shows the prototypes from the Vulcan runtime - that is something you should change.

About your other considerations: I'm not experienced in performance tests, but I know what I'm using in my applications, and therefore I have built these checks.

About 50% of my codeblocks are compile-time codeblocks (mostly used in ASCan() and ASort() functions), and they are slower with the X# runtime, but not much slower than with the Vulcan runtime. This is something I can live with, because the overall time is not very long, and I don't think a user will notice it.

The other codeblocks I'm using are constructed at runtime, mostly in my reporting engine, but I'm using the macro compiler heavily in other areas too - not to forget the indexing of DBF files, where the macro compiler is used heavily. I know that compilation is slow and execution fast - even in Vulcan - and that is shown in the second part of the performance test.

I know about the macro compiler caching in X#, and therefore I had expected the third test with the X# version to outperform not only the Vulcan version by ages, but also the VO version.

And since the times here are really long, this is a very important part of the checks.

I know about the fast and limited macro compiler Nikos is working on, and I expect it to be the fastest version of all these, but we have to wait until it is ready.

Fortunately, performance is an issue for your team, and both Robert and Nikos, the compiler guys, know that macro compilation and execution time are really important for xBase applications.

Therefore I'm expecting that the overall performance of migrated VO applications will be better than that of the VO versions, even if the environment (.NET) should lead to lower performance than natively compiled applications.

We use thousands of codeblocks in some of our apps. We encapsulate the codeblock in another class and only compile it the first time it is executed. The concept works very nicely, like a just-in-time compiler.
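That lazy-compile wrapper could look roughly like this (a Python sketch with invented names; the actual implementation is an X# class wrapping a codeblock):

```python
# Sketch of a lazy-compiling wrapper: store the source string and
# compile it only on first execution, then reuse the compiled form.
class LazyCodeblock:
    def __init__(self, src):
        self.src = src
        self._code = None                # compiled on demand

    def eval(self, **env):
        if self._code is None:           # first execution: compile now
            self._code = compile(self.src, "<codeblock>", "eval")
        return eval(self._code, {}, env)

cb = LazyCodeblock("a * b")
print(cb.eval(a=6, b=7))                 # compiles on this call
print(cb.eval(a=2, b=3))                 # reuses the compiled code
```

The benefit is that apps with thousands of codeblocks only pay compilation cost for the ones that actually run.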

In our tests, we compiled the codeblock once and then evaluated it millions of times. The tests included string, number, and date field relational and equality operations, some basic math operations, and some other tests, but no array scans.

In our limited testing, X# was faster than Visual Objects. That was so inspiring that I just got the Advantage data provider working. I had to use the 32-bit options, but I am now querying real data.