The in-memory representation is to be a tree of MusicElementNodes (better name later), which map MusicXML elements and attributes nearly 1:1 onto Smalltalk classes and instance variables.
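The intended 1:1 mapping can be sketched as follows. The real implementation would be Smalltalk classes; this is a Python stand-in, and the class and field names (MusicElementNode, NoteNode, etc.) are hypothetical placeholders, not names fixed by the design.

```python
class MusicElementNode:
    """Base node in the tree: owns an ordered list of child nodes."""
    def __init__(self):
        self.children = []

    def add_child(self, node):
        self.children.append(node)

class NoteNode(MusicElementNode):
    """Mirrors a MusicXML <note> element: its <pitch> and <duration>
    children become instance variables on the node."""
    def __init__(self, step, octave, duration):
        super().__init__()
        self.step = step          # e.g. 'C'
        self.octave = octave      # e.g. 4
        self.duration = duration  # in MusicXML divisions

note = NoteNode('C', 4, 480)
```

The point of the near-1:1 mapping is that reading or writing MusicXML becomes a mechanical traversal rather than a translation between mismatched models.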

The graphical representation will be Notation Morphs (better name later). These morphs will be a view using the MusicElement tree as the model.

To play a score, a Squeak MIDIScore will be generated from the MusicElement tree. MIDIScores can play and save themselves.

The MusicElementNodes will have methods such as generateXML, generateMorph, and generateMIDI. Invoking one of these methods on the root node will cause an appropriate structure to be built recursively.
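The recursive generate* pattern can be sketched like this, again in Python as a stand-in for the Smalltalk methods; generateXML is shown, and generateMorph and generateMIDI would follow the same shape with different output types.

```python
import xml.etree.ElementTree as ET

class MusicElementNode:
    """Base node: generate_xml builds an XML element for this node,
    then recursively appends the XML of each child."""
    tag = 'music-element'

    def __init__(self, children=None):
        self.children = children or []

    def generate_xml(self):
        element = ET.Element(self.tag)
        for child in self.children:
            element.append(child.generate_xml())
        return element

class MeasureNode(MusicElementNode):
    tag = 'measure'

class NoteNode(MusicElementNode):
    tag = 'note'

# Invoking generate_xml on the root builds the whole XML tree.
root = MeasureNode([NoteNode(), NoteNode()])
xml_text = ET.tostring(root.generate_xml()).decode()
```

Because each node only knows how to emit itself and delegate to its children, adding a new element type means adding one class with its own generate* methods, without touching the traversal.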

To generate a MusicElement tree from a MusicXML file, an event-based XML parser will be used.
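The usual way to build a tree from an event-based parser is a stack of open elements. A minimal Python sketch (the real version would use a Smalltalk SAX-style parser and create MusicElementNodes; plain dicts stand in for them here):

```python
import xml.sax

class MusicElementTreeBuilder(xml.sax.ContentHandler):
    """On each startElement, create a node, attach it to the node on
    top of the stack (its parent), and push it; pop on endElement."""
    def __init__(self):
        self.root = None
        self.stack = []

    def startElement(self, name, attrs):
        node = {'tag': name, 'attrs': dict(attrs), 'children': []}
        if self.stack:
            self.stack[-1]['children'].append(node)
        else:
            self.root = node
        self.stack.append(node)

    def endElement(self, name):
        self.stack.pop()

handler = MusicElementTreeBuilder()
xml.sax.parseString(b'<score-partwise><part id="P1"/></score-partwise>',
                    handler)
```

When parsing finishes, the stack is empty and `handler.root` holds the completed tree.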

The transformation of a MusicElement tree to Notation Morphs and MIDI is one-way. The opposite transformation is not needed: Notation Morphs, if used for editing, effect changes on the original tree directly, and MIDI is rarely edited directly.

Questions/Issues:

We can expect these representations to be quite large, possibly exceeding available memory. How will that be handled? Using the event-based XML parser means the XML file's size doesn't matter during parsing, but a MusicElement tree of nearly the same size is still created in memory.
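To make the distinction concrete: an event-based parse by itself uses constant memory, and the cost only appears if every event is retained as a node. A hypothetical streaming pass that keeps a single counter instead of a tree illustrates this:

```python
import xml.sax

class NoteCounter(xml.sax.ContentHandler):
    """Streaming pass over MusicXML that keeps only a counter, so
    memory use stays constant regardless of file size. Building the
    full MusicElement tree gives up exactly this property."""
    def __init__(self):
        self.count = 0

    def startElement(self, name, attrs):
        if name == 'note':
            self.count += 1

counter = NoteCounter()
xml.sax.parseString(b'<measure><note/><note/><note/></measure>', counter)
```

Whether parts of the tree could be built lazily, or discarded and re-parsed on demand, is an open design question.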

The MusicXML DTD is complicated enough that I don't understand it very well yet, and large enough that full support will not arrive for a while.