The long-range goal here is to define a data structure that adequately models musical events. Since time is the fundamental domain parameter for music composition systems, its consideration comes first. I begin with a number of assertions, for which I invite questions, extensions, corrections, and objections.

Performance Time:

Performance time is essentially real time. For the most part, creating playable MIDI streams from a music application's data structures requires only knowledge defined for the instant in time being transformed into MIDI. That is, the MIDI values at time t depend only on application data defined at time t, or:

MIDI(t) = f(t)

This is, however, a limited approximation. Many musical instruments can connect consecutive notes with various kinds of phrasing, and many of these phrased connections require prior knowledge of the characteristics of the next event. To my knowledge only Csound and Synful Orchestra can handle such processing. For this case we would say

MIDI(t) = f(t, t1)

where t is the time for which we are constructing MIDI data and t1 is the time of the next event.
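As a sketch of the two expressions above (the names `Event` and `render_midi` are my own, not from any real MIDI library), rendering is a pure function of the current time t, optionally taking the next event at t1 so the connection can be phrased, e.g. as a slight legato overlap:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    time: float      # onset in seconds (performance time)
    pitch: int       # MIDI note number
    velocity: int    # MIDI velocity, 0-127

def render_midi(event: Event, next_event: Optional[Event] = None) -> dict:
    """MIDI(t) = f(t), or f(t, t1) when the next event is known."""
    msg = {"note": event.pitch, "velocity": event.velocity, "time": event.time}
    if next_event is not None:
        # With lookahead we can phrase the connection: here, overlap
        # the notes slightly for an (illustrative) legato transition.
        gap = next_event.time - event.time
        msg["duration"] = gap * 1.05   # small overlap into the next note
    return msg
```

Without the lookahead argument, no duration can be assigned and the renderer is limited to the memoryless f(t) case.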

These two expressions encompass everything that is needed to convert an app's internal music representation to MIDI.

Composition Time:

Many if not most compositions have meter. Meter provides an abstraction of time that is regular with regard to rational event subdivision but elastic in its mapping to real time. The way time is represented in a score, in the context of meter, we will call composition time and denote by T.
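The elastic mapping from composition time T (in beats) to real time can be sketched with a piecewise-constant tempo map. The `TempoMap` class below is purely illustrative, not taken from any real system:

```python
class TempoMap:
    """Maps composition time (beats) to performance time (seconds).

    Tempo is piecewise constant between the listed change points.
    """
    def __init__(self, changes):
        # changes: list of (beat, bpm) pairs, sorted by beat,
        # with the first entry at beat 0
        self.changes = changes

    def beats_to_seconds(self, beat: float) -> float:
        """Sum 60/bpm over each tempo segment up to `beat`."""
        seconds = 0.0
        for i, (b, bpm) in enumerate(self.changes):
            # segment ends at the next change point, or at `beat`
            next_b = self.changes[i + 1][0] if i + 1 < len(self.changes) else beat
            span = min(beat, next_b) - b
            if span <= 0:
                break
            seconds += span * 60.0 / bpm
        return seconds
```

For example, with changes [(0, 120), (4, 60)], beat 6 maps to 4.0 seconds: four beats at 120 BPM (2 s) plus two beats at 60 BPM (2 s). The subdivision in beats stays regular while the realization in seconds stretches.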

If we are constructing a composition in non-real time, we have the opportunity to directly associate an event with its context. An event's position in a score provides only indirect associations with the score, and this kind of association is lossy in terms of data connections compared to direct association. For example, if a note is the third event in a motif, that association can be established only by extensive data analysis and might even be missed. On the other hand, if the note's data construct possesses a reference to its position in its source motif, we have that fact directly. A source motif is an abstraction that has no location in time, even though it may have a duration.
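A minimal sketch of direct association, with all names illustrative: a note carries a reference to its source motif and its index within it, rather than leaving that relationship to be recovered by analysis.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Motif:
    name: str
    pitches: List[int]   # a motif has content and a duration,
                         # but no location in time

@dataclass
class Note:
    pitch: int
    onset: float                          # composition time T
    source_motif: Optional[Motif] = None
    motif_index: Optional[int] = None     # e.g. 2 => third event of the motif

motif = Motif("opening", [60, 62, 64])
note = Note(pitch=64, onset=8.0, source_motif=motif, motif_index=2)
# The fact "this note is the third event of 'opening'" is held
# directly on the note, with no data analysis required.
```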

Generally, in human-composed music, the data values of an event depend on the events that precede and follow it. At em2009, ark asked whether a pause included in a score to account for applause would be considered to be in composition time or performance time. My answer was that it is part of composition time, because the existence and duration of such an event depend on the dynamics of the music's salience for some span both before and after the time when the pause occurs.

From the above it is clear that, at time T, the data values of a composition are diverse functions of multiple parameters that may include points and spans of composition time, as well as data concerning many other non-temporal abstractions.
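In the same informal notation as the performance-time expressions above, this might be summarized (my paraphrase, not the author's formula) as:

Data(T) = f(T, T1...Tn, S1...Sm, A)

where the Ti are related points of composition time, the Sj are spans of composition time, and A stands for data from non-temporal abstractions such as motifs.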

Many musical instruments are able to connect consecutive notes with various kinds of phrasing and many of these phrased connections require prior knowledge of the characteristics of the next event. To my knowledge only Csound and Synful Orchestra can handle such processing.

If I understand the model correctly thus far, PD, Max, and by extension Max4Live under Live 8 also have this capability.

Here is a link to a thread describing work I am doing on implementing an efficiently computable abstraction of meter.

_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham

I'm a pragmatist, so while I find this very interesting, it's hard to get it all in without having a clear understanding of the goal.

You say that we need to adequately model musical events. What would you say happens when we have this model in place - i.e. what do you value as "adequate"?

I gather the quest is for some kind of notation, with the focus at this stage being on timing. Who or what will read this notation? Man or machine? Who will write it? Will it be edited on a computer or written on paper? What is more important, ease of noting music down or ease of reading?

My first thought when reading this was that what we need is a sequence of events, plus for each event we need to know the current state of the music (i.e. what has happened before and what will happen afterwards) and how the event affects the state.

One thing I thought about was the state machines I've seen in computer science: you have a diagram of circles with lines between them. One of the circles is the start circle. Each line carries a note, and each circle performs some calculation to decide which line to follow to the next circle. This would create a tune; the difficulty in making this manageable would be describing the conditions in the circles.
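A toy version of that state-machine idea (everything here is illustrative): circles are states, each state emits a note, and the "calculation in the circle" is a function choosing the next state.

```python
def play(transitions, start, steps):
    """transitions: state -> (note, function choosing the next state)."""
    state, tune = start, []
    for i in range(steps):
        note, choose_next = transitions[state]
        tune.append(note)
        state = choose_next(i)   # the circle's calculation
    return tune

# Two states whose transitions depend on step parity -- the lambdas
# play the role of the conditions written inside the circles.
machine = {
    "A": ("C4", lambda i: "B"),
    "B": ("E4", lambda i: "A" if i % 2 else "B"),
}
print(play(machine, "A", 5))   # → ['C4', 'E4', 'C4', 'E4', 'C4']
```

As the post notes, the hard part is not the machinery but writing conditions rich enough to produce interesting music.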

I'm a pragmatist, so while I find this very interesting, it's hard to get it all in without having a clear understanding of the goal.

The ultimate goal is another attempt at this, but as an open-source project.

Antimon wrote:

I gather the quest is for some kind of notation, with the focus at this stage being on timing. Who or what will read this notation? Man or machine? Who will write it? Will it be edited on a computer or written on paper? What is more important, ease of noting music down or ease of reading?

I raised the issue of time simply to clarify the distinction between performance time and composition time. The system I am working on presents the data behind a composition through various views of various abstractions. Standard score notation is the one on which I am focusing first. Any abstraction that is modeled by a function whose domain is time is defined within some channel (not a MIDI channel). The system has channel view windows that can display the user's choice of channels and related data: staff views, MIDI piano rolls, figured bass, etc.

If I understand the model correctly thus far, PD, Max, and by extension Max4Live under Live 8 also have this capability.

Thanks for the update! I find it difficult to keep abreast of developments.

ChucK as well! And I imagine SuperCollider.

In fact, ChucK is first and foremost strongly timed. Delay/synchronization is one of the fundamental programming constructs in ChucK.

_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.

