- Treating keywords as identifiers from the outset can accelerate keyword identification via a lookup after the fact.

Yup. This was actually suggested in the compiler design class I took, and it's the approach I generally use. If the number of keywords is small enough I'll even just hardcode a little parser using switch statements for the matching, though this isn't a great general solution.
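To make the idea concrete, here's a minimal sketch (in C++ rather than D, purely for illustration) of the lex-as-identifier-then-look-up approach: the scanner never special-cases keywords, and a single hash lookup classifies the lexeme afterward. The token names and keyword set are invented for the example.

```cpp
#include <string>
#include <unordered_map>

// Token kinds for the sketch; a real lexer has many more.
enum class Tok { Identifier, KwIf, KwWhile, KwReturn };

// Scan every identifier-shaped lexeme the same way, then decide
// keyword-vs-identifier with one hash lookup after the fact.
Tok classify(const std::string& lexeme) {
    static const std::unordered_map<std::string, Tok> keywords = {
        {"if", Tok::KwIf}, {"while", Tok::KwWhile}, {"return", Tok::KwReturn},
    };
    auto it = keywords.find(lexeme);
    return it == keywords.end() ? Tok::Identifier : it->second;
}
```

For a small, fixed keyword set, the hash table can be swapped for the hardcoded switch-on-first-character matcher mentioned above; the scanning side stays identical either way.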

Quote:

- The DMD frontend doesn't use a hashtable for keyword lookup?!

The DMD frontend actually seems a bit inefficient in places--this being one of the obvious deficiencies. Doesn't make sense to me either.

Enki is almost ready for release for Win32, thanks largely to H3r3tic. I put off committing his improvements in favor of getting documentation, sub-project management, and building under control.

In short, things have been reorganized slightly to accommodate the unique layout of this project.

New Build System

The build/ directory contains a subtree for each project hosted under DDL: enki/, meta/ and ddl/ itself. Within each of those are .d scripts that can be executed to create documentation, roll up an SDK, or create an executable.

For example, the following will build a set of DDL utils:

Code:

build -exec build/ddl/utils

And this will generate the documentation for Enki:

Code:

build -exec build/enki/doc

The only caveat is that these must be run from trunk/, with utils/Script.d in the usual place. The SDK downloads already have these built in.

Documentation

I took a few cues from CanDyDoc and created an AJAX-based doc system that works very well with the way DMD likes to emit .html files. The only wart is having to set up a modules.ddoc file, in much the same way as CanDyDoc. The rest works quite transparently, and plays well if the documentation is stored locally - it doesn't require a web server to work.

It's also a very practical improvement over my former XML/XSL based solution.

For now, the presentation in the docs is incomplete, and very plain. As DSource is presently going through some changes, the webserver here is still serving up the old docs. Please feel free to download the SDK, or draft the latest doc tree from SVN to see for yourself.

-- !Eric.t.Anderton at gmail

Posted the imminent RC1 release out on the DNG. Thanks everyone for your support.

Also hacked around with the ELF loader some, and am considering how to tackle the ongoing documentation effort.

ELF

Okay, this begins what may be yet another entry in a series on "specifications leaving too much detail out". Today's lesson: how not to parse an ELF object file.

No doubt the authors of the ELF32 spec had C on the brain. As if the heading "Chapter 3 - C Library" wasn't a big enough clue, I began to notice lots of places where pointer arithmetic could speed things up. All of the internal record types are heavily referential to one another, even to the point of using byte-wise offsets into other sections - namely: string tables.

(all rise) Please open your manuals to the book of ELF32, section 1-16
10 and his great noodlyness spoke;
20 and lo, it was really weird;

Code:

As the example shows, a string table index may refer to any byte in the section. A string may appear
more than once; references to substrings may exist; and a single string may be referenced multiple times.
Unreferenced strings also are allowed.

- Ramen

Anyway, it's the "references to substrings" part that got me thinking. I think the easiest way to process an ELF file is not via the traditional "parse, interpret, store" idiom. Instead, I'm going to opt for reading in the entire object file as a big slab of data, and use pointers to set up the various arrays. From there, a semantic pass at the ELFModule level can create the ExportSymbol set similar to what OMFModule is already doing.

The slab will then be released along with the enclosing ELFBinary object, once the ELFModule has what it needs.

This method should reduce both memory consumption and parsing overhead. This means that I can concentrate more of my time on semantics, rather than on where I'm going to keep all these redundant lookup tables.

Progress has been slow due to RealLife(tm) issues, as I'm sure it has been for most everyone else right now. Depending on where you are professionally, you are:

1) Starting a new semester at school
2) Dealing with the end/start of the fiscal year at work
3) Both

For those in category #3, we salute you. I'm mired at #2 at the moment - actually, I'm up to my eyeballs in #2, but that's another story.

The gist of what's been going on here has been largely in the wiki. I am trying to push the docs into a more usable form, all the while documenting little pieces that exist mostly inside my head. So this exercise will prove useful for everyone involved, myself included.

The most dramatic change was moving a reflection tutorial to the beginning of the set of DDL tutorials. Yeah, it sounds crazy, but I think it improves the flow of the tutorials from n00b to DDL expert.

There's another subtlety that I picked up on: using the -L-map option on the command line for build/dmd truly is significant. In a lot of cases it doesn't matter, but I noticed that members that could easily be culled from an app (e.g. never-used functions) are not listed in a standard .map file.

Another thing: I'm suddenly writing a *lot* of code snippets. I think I'm going to have to bundle these somehow for download, so folks can follow the tutorials a little easier.

My status recently can be described as "mostly stalled out". It's been an interesting fall so far, grappling with long hours at the office and handling real life as it happens. But, I'm still working on things, just much slower than I anticipated.

AgentOrange informed us all in the DNG that he's working on refactoring COFF support for the more recent DDL API. ELF support is still on the table, pending my getting off my butt to get it refined and in place. Documentation is also still in progress, and is showing signs of improvement. Enki bugfixes are also in the fire, patiently awaiting their inclusion into V1.3 - I may have to hold off on the improved codegen for this.

With respect to documentation, I've experimented with mimicking candydoc with AJAX and individual document tabs, ala www.gotapi.com. This turned into a rather long and sordid affair with various JS toolkits, which ended up where I started: DIY. I could rant for *days* about how what we all really want to be using is XForms and/or a richer XHTML widget set - but this isn't a web-standards blog.

Anyway, the end result should bear some superficial architectural and visual similarities to candydoc, but will make for a deeper doc browser in the end. I'll post here when I have more to share on this front.

The only wart with the doc-browser concept is that IE generates a warning about running active content, and asks permission to execute it. I find this kind of amusing seeing as how this only happens when the browser is *offline*, and Javascript is one of the most heavily sandboxed languages I know of.

Not much new to report here, so this is mostly a bump to let everyone know that I'm still alive.

I've made some progress on the doc browser, but have precious little to upload to SVN to show for it. Working with YUI and Jack Slocum's YUI-EXT has made Javascript coding almost as powerful as Prototype, with widgets that are far more useful than those I've seen elsewhere.

I spent a lot of time crafting a tab-bar implementation that is fully cross-browser; believe me, this wasn't easy. Combined with the typical feature set of candydoc (which will be refactored and folded into this), I think we'll have a formidable online/offline ddoc viewer.

If anyone out there knows of a *solid* JS tab control solution, one that can cope with adding/removing tabs into/from a dynamically laid-out container (ala Yahoo mail), I'd like to hear about it.

Reflection

Even though DDL has limited reflection potential, I still think there's value in crafting an interface a bit more formal than the template methods on DynamicLibrary. To that end, I've crafted the start of something... a little different.

I've drawn considerable inspiration from an unlikely source: the HTML Document Object Model. I realized that the flexibility of having a super-broad tree node class allows for some pretty rich usability in web applications, so why not here too? Granted, I think that design consideration was done mostly to accommodate javascript, but it still applies in D.

The nodes used in this "Reflection Object Model" are all subclasses of DSymbol, so casting can be used to determine object type if you want a more traditional solution. Otherwise, you can use the "soft" approach of surfing the namespace (like above) and rely on exception handling to cover for missing data or mistaken node types.

The nodes also map back to ddl.ExportSymbol, where appropriate, if the need arises. TypeInfo references are also resolved for fields and functions, which is nice too.
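To illustrate the two access styles side by side, here is a hedged sketch (in C++ for testability; `Node`, `FunctionNode`, and the method names are hypothetical stand-ins, not DDL's actual DSymbol API) of a DOM-inspired node tree where you can either downcast for the traditional approach, or surf children by name and let exceptions cover missing entries:

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// One broad node type that everything subclasses, DOM-style. Callers can
// downcast ("traditional") or surf the namespace by name and rely on
// exception handling for missing data ("soft").
struct Node {
    std::map<std::string, std::shared_ptr<Node>> children;
    virtual ~Node() = default;

    Node& operator[](const std::string& name) {
        auto it = children.find(name);
        if (it == children.end())
            throw std::runtime_error("no symbol: " + name);
        return *it->second;
    }
};

struct FunctionNode : Node { std::string mangled; };
struct FieldNode : Node { size_t offset = 0; };
```

Surfing `root["acme"]["run"]` either lands on a node you can `dynamic_cast` to the expected subclass, or throws - so a mistaken path and a mistaken node type both funnel into the same error-handling strategy.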

Bug - Non-Referenced Host Functions in DDL

There's a small issue regarding .map files in the current DDL release. Due to the way that OPTLINK optimizes code, functions in the host exe that aren't referenced at link time will get thrown out - they won't even appear in the .map file.

For now, the workaround is kind of a hack: simply anchor the unreferenced function to a field that never gets used.

Code:

// never referenced anywhere else in the program
void nobodyCallsMe(){}

// anchoring it to a module-level field keeps OPTLINK from culling it
void function() __hack = &nobodyCallsMe;

This works since OPTLINK doesn't throw out un-referenced fields.

The tutorial currently uses 'export' instead to solve this issue, which I find to be a more tasteful solution. Since the InSitu loader can't cope with the exports section in the .map file, a fix is pending and will be rolled into 1.3. More on this when I commit a patch.

The two tickets were intended to help narrow the field of possible issues behind the bugs H3r3tic was getting. The last one could only be given a name just now - the nature of the problem was very hard to pin down. It took two people, megabytes of debug dumps, and a lot of detective work.

The mention of COMDAT symbols should ring a bell with some of you. Walter mentioned that these are the flavor of symbol used (mostly) by template instances. The idea is that COMmon DATa symbols like this are intended to be redundant within a given linker pass, and should reduce to a single shared instance.

What's interesting is that OPTLINK has trouble when a module has *nothing but* COMDAT symbols - you need to provide at least one normal symbol type to get OPTLINK to link that module. DDL doesn't have this problem since there are no second-class symbol types.
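The "reduce to a single shared instance" behavior can be sketched in a few lines (C++ for illustration; the `SymbolTable` type and its layout are invented for this example and don't reflect DDL's internals): the first definition of a COMDAT name wins, and every later duplicate folds onto it.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Sketch of COMDAT folding: the first definition of a given COMDAT name
// wins, and duplicate definitions from other modules collapse onto it.
struct SymbolTable {
    std::map<std::string, uintptr_t> addresses;

    // Returns the address every reference to `name` should resolve to.
    uintptr_t defineComdat(const std::string& name, uintptr_t candidate) {
        auto result = addresses.emplace(name, candidate);
        return result.first->second;  // first definition wins
    }
};
```

Because every symbol goes through the same path, a module containing nothing but COMDAT definitions poses no special problem - which is why DDL sidesteps the OPTLINK quirk described above.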

Of course this has left my timeline hopelessly skewed, again. While I'd like to release alongside D 1.0, the best we're going to get is a Win32 rendition with documentation and an SDK, but there are still going to be features lacking. Things like deeper reflection and ELF support may have to wait even longer still.

This is partially a *bump* to let everyone know that the project is still alive, but is on the back-burner at the moment.

After reviewing the release notes for DMD 0.176, it looks like Walter placed Don's recommendations for name-mangling into effect. This means that some small portions of the linker's internals (namely ModuleInfo discovery) are likely to break if you're using 0.176 or later.

Once this release settles out, and I get my other work wrapped up, DDL will get a touch-up to cover this change.