This is the fifth version of my .NET MIDI toolkit. I had thought that the previous version was the final one, but I have made many changes that have warranted a new version. This version takes a more traditional C#/.NET approach to flow-based programming, which I'll describe below. I wasn't comfortable with version four's implementation along these lines, so I took a step back and made changes that keep the flow-based approach while remaining within C#/.NET accepted idioms. I'm hoping that this will make the toolkit easier to use and understand.

The toolkit has seen many revisions over the past two to three years. Each revision has been an almost total rewrite of the previous one. When writing software, it is usually a bad idea to make updates that break software using previous versions. However, my goal in creating this toolkit has been to provide the best design possible. As I have grown as a programmer, I have improved my skills and understanding of software design. This has led me to revise the earlier designs of the toolkit without regard to how these revisions will break code. Not exactly the attitude one wants to adopt in a professional setting, but since the toolkit is free and since I have used it as a learning experience to improve my craft, my priorities are different.

Before I get into the specifics of the toolkit, I would like to talk about its architecture. With each version of the toolkit, I have struggled with how to structure the flow of messages through the system. I wanted an approach that would be versatile and allow customization. It would be nice, I thought, if users could plug their own classes into the flow of MIDI messages to do whatever they want. For example, say you wanted to write a harmonizer, a class capable of transposing notes in a specified key to create harmony parts automatically. It should be easy to simply plug the harmonizer into the toolkit without affecting other classes. In other words, the toolkit should be customizable and configurable.

Investigating this problem led me to J. Paul Morrison's excellent website on flow-based programming. He has written a book on the subject, which can be found on his website as well as at Amazon.

The idea is simple and will probably seem familiar to most: data flows through a network of components, each of which can do something interesting with the data before passing it along to the next. In design pattern terms, this approach most resembles the Pipes and Filters pattern and is also similar to the Chain of Responsibility pattern. See J. Paul Morrison's book for more information.

(Just to be clear: when I say "component," I'm not necessarily talking about classes that implement the IComponent interface. I'm speaking in more general terms. A component is simply an object in a chain of objects designed to process the flow of messages.)

Below is a very basic network of components designed to handle the flow of MIDI channel messages:

Input Device → User Component → Channel Stopper → Output Device

The flow of messages begins with the input device. An input device receives MIDI messages from an external source. Next, the messages flow through a user component. This component might want to do something like change the MIDI channel, transpose note messages, or change the messages in some way. Then the messages pass through the channel stopper. This component simply keeps track of all currently sounding notes. When the message flow stops, the channel stopper can turn off all sounding notes so that none of them hang. Finally, the messages reach the output device. Here they are sent to an external MIDI device.

Well, this is something I really struggled with. You can read about the different approaches I tried on my blog; I found myself going around in circles. In version four of the toolkit, I settled on a source/sink abstraction. I created an interface representing "sources," where a source represents a source of MIDI messages. "Sinks" were represented by delegates that could be connected to sources; a sink is simply a method capable of receiving a MIDI message. This worked well, but the implementation looked a little "funny": a C# programmer seeing the code for the first time might be confused about what was going on.

I decided to do away with the sink/source infrastructure and use something more idiomatic: sources of MIDI messages raise events when they have messages to send. Instead of implementing an interface with Connect and Disconnect methods for hooking up sinks, sources simply expose events. There are two advantages here: first, sources no longer have to implement an ISource interface, and second, .NET events are very familiar territory. A source of MIDI messages now looks like an everyday class that just happens to have one or more events.

How about sinks, the objects that receive MIDI messages? A sink can be anything; it's just an object with a method that can receive a MIDI message. In version four, I had a Sink delegate representing methods capable of receiving MIDI messages, and these delegates were used to connect with sources. The toolkit still uses delegates to "connect" sources and sinks, but differently than before: delegates now act as adaptors that subscribe to the events raised by sources and adapt them so that objects needing the messages can receive them without any knowledge of the source.

Let's look at an example. Say that we're using an InputDevice to receive MIDI messages from a MIDI device, such as your soundcard. The InputDevice raises a ChannelMessageReceived event each time it receives a channel message. Suppose that we want to keep track of any note-on channel messages so that when we decide to stop receiving messages, we can turn off any currently sounding notes to keep them from "hanging." The ChannelStopper class is just for this purpose. However, the ChannelStopper has no knowledge of the InputDevice class. We need a way to hook them up so that messages generated by the InputDevice can be passed along to the ChannelStopper. Here is how we can do this with an anonymous method:
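The code listing did not survive here, so the following is a self-contained sketch of the same hookup. The stand-in classes are deliberately simplified (an int stands in for a real ChannelMessage, and the SimulateIncoming helper is purely illustrative, not a toolkit member); the point is the wiring pattern, in which the anonymous method is the only piece that knows about both ends.

```csharp
using System;

// Simplified stand-ins for the toolkit's InputDevice and ChannelStopper,
// reduced to just the members this example needs.
public class ChannelMessageEventArgs : EventArgs
{
    public int Message { get; }  // a real ChannelMessage in the toolkit
    public ChannelMessageEventArgs(int message) { Message = message; }
}

public class InputDevice
{
    // The source: raises an event each time a channel message arrives.
    public event EventHandler<ChannelMessageEventArgs> ChannelMessageReceived;

    // Illustrative only; the real InputDevice receives messages from hardware.
    public void SimulateIncoming(int message) =>
        ChannelMessageReceived?.Invoke(this, new ChannelMessageEventArgs(message));
}

public class ChannelStopper
{
    public int ProcessedCount { get; private set; }

    // The sink: any method able to receive a channel message.
    public void Process(int message) => ProcessedCount++;
}

public static class Demo
{
    public static ChannelStopper Connect()
    {
        var inDevice = new InputDevice();
        var stopper = new ChannelStopper();

        // The anonymous method is the adaptor: it knows both ends,
        // while neither class knows anything about the other.
        inDevice.ChannelMessageReceived += delegate(object sender, ChannelMessageEventArgs e)
        {
            stopper.Process(e.Message);
        };

        inDevice.SimulateIncoming(0x90); // a note-on status byte, for illustration
        return stopper;
    }
}
```
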

In this example, an anonymous method adapts events raised by the InputDevice so that they can be processed by a ChannelStopper. The InputDevice is the source of channel messages and the ChannelStopper is a sink capable of receiving and processing channel messages. The nice thing about this approach is that no explicit source/sink infrastructure is needed. Neither class knows anything about being a source or sink. The flow of messages is orchestrated by an external agent, in this case, an anonymous method.

There are several categories of MIDI messages: Channel, System Exclusive, Meta, etc. In designing a MIDI toolkit, the challenge is to decide how to represent these messages. One approach is to create two or three general MIDI classes and have specific types of MIDI messages represented through the properties of those classes. The Java MIDI API takes this route. Another approach is to create a large collection of fine-grained classes to represent all of the different types of MIDI messages. For example, there are many types of Channel messages, such as the Note-on, Note-off, Program change, and Pitch Bend messages. The fine-grained approach would create a class for each of those message types. My approach was to take a middle ground. I created classes for the general categories of MIDI messages but left the specific types of messages as properties within those classes. This kept the class hierarchy lightweight and manageable while providing enough specialization to make working with MIDI messages easy.

Here is the hierarchy of MIDI message classes in the MIDI toolkit:

IMidiMessage
    ShortMessage
        ChannelMessage
        SysCommonMessage
        SysRealtimeMessage
    SysExMessage
    MetaMessage

Specific types of messages are represented through properties. For example, the ChannelMessage class has a Command property that can be set to represent the various types of Channel messages.

All message classes are immutable. This makes sharing messages throughout an application safe. To create messages, you pass the desired property values to their constructors. Additionally, the toolkit provides a set of builder classes to make message creation more convenient.

The toolkit provides the following message builders:

ChannelMessageBuilder

SysCommonMessageBuilder

KeySignatureBuilder

MetaTextBuilder

SongPositionPointerBuilder

TempoChangeBuilder

TimeSignatureBuilder

The ChannelMessageBuilder and the SysCommonMessageBuilder also use the Flyweight design pattern. When a new message is built, it is stored in a cache. When another message is needed with exactly the same properties as one that has already been built, the cached message is retrieved rather than a new one created. Considering that a typical MIDI sequence is made up of thousands of messages, many of them identical, it is easy to see how the Flyweight pattern applies.
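As a self-contained sketch of the idea (the Message and MessageCache types below are illustrative stand-ins, not the toolkit's actual caching code, which keys its cache on message properties in a similar spirit):

```csharp
using System.Collections.Generic;

// Illustrative immutable message: its defining values never change,
// which is what makes sharing cached instances safe.
public sealed class Message
{
    public int Status { get; }
    public int Data1 { get; }
    public int Data2 { get; }
    public Message(int status, int data1, int data2)
    {
        Status = status; Data1 = data1; Data2 = data2;
    }
}

// Flyweight cache: identical messages are shared, not rebuilt.
public class MessageCache
{
    private readonly Dictionary<(int, int, int), Message> cache =
        new Dictionary<(int, int, int), Message>();

    public Message Get(int status, int data1, int data2)
    {
        var key = (status, data1, data2);
        if (!cache.TryGetValue(key, out Message message))
        {
            // First request for this combination: build and cache it.
            message = new Message(status, data1, data2);
            cache[key] = message;
        }
        return message; // subsequent requests get the same instance
    }
}
```

A sequence containing the same note-on message a thousand times needs only one Message object under this scheme.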

Here is an example of creating a ChannelMessage object representing a note-on message:
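The original listing is missing here, so the following self-contained sketch shows the builder pattern the toolkit uses. The property names (Command, MidiChannel, Data1, Data2, Result) match my recollection of ChannelMessageBuilder, but treat these stand-in types as an approximation rather than the exact API.

```csharp
// Illustrative stand-ins with the general shape of the toolkit's
// ChannelMessage and ChannelMessageBuilder classes.
public enum ChannelCommand
{
    NoteOff = 0x80,
    NoteOn = 0x90,
    ProgramChange = 0xC0,
    PitchWheel = 0xE0
}

public sealed class ChannelMessage
{
    public ChannelCommand Command { get; }
    public int MidiChannel { get; }
    public int Data1 { get; }   // e.g. note number
    public int Data2 { get; }   // e.g. velocity

    // Immutable: all values are fixed at construction time.
    public ChannelMessage(ChannelCommand command, int midiChannel, int data1, int data2)
    {
        Command = command; MidiChannel = midiChannel; Data1 = data1; Data2 = data2;
    }
}

public class ChannelMessageBuilder
{
    public ChannelCommand Command { get; set; }
    public int MidiChannel { get; set; }
    public int Data1 { get; set; }
    public int Data2 { get; set; }
    public ChannelMessage Result { get; private set; }

    public void Build() => Result = new ChannelMessage(Command, MidiChannel, Data1, Data2);
}

public static class Example
{
    public static ChannelMessage BuildNoteOn()
    {
        var builder = new ChannelMessageBuilder
        {
            Command = ChannelCommand.NoteOn,
            MidiChannel = 0,
            Data1 = 60,   // middle C
            Data2 = 127   // full velocity
        };
        builder.Build();
        return builder.Result;
    }
}
```
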

Often there is a need to process a collection of IMidiMessages, and how each message is processed depends on its type. The problem is that you cannot tell an IMidiMessage's type without an explicit check; the IMidiMessage interface provides a MessageType property just for this purpose. However, repeatedly checking message types throughout your code can be cumbersome.

The MessageDispatcher class is designed to automate these checks. It acts as a source for every type of MIDI message, raising an event each time it dispatches a message; the type of event raised is determined by the type of message being dispatched.

MIDI playback is driven by ticks that occur periodically. The sources of these ticks are MIDI clocks, which come in all shapes and sizes. For example, playback can be driven by an internal or an external clock. Also, the way ticks are generated depends on whether the MIDI sequence has pulses per quarter note resolution or SMPTE resolution. For the vast majority of situations, an internal clock generating ticks with pulses per quarter note resolution is all you need.

The IClock interface represents the basic functionality for all MIDI clocks:
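The interface listing is missing here. Reconstructed from the description below, it looks roughly like this (the exact event-args types are an assumption, and the StubClock is purely illustrative). Note how Start lives on the concrete class rather than on IClock, matching the design rationale the article gives.

```csharp
using System;

// Reconstructed from the article's description; the toolkit's actual
// interface may differ in its event-args types.
public interface IClock
{
    event EventHandler Tick;      // a MIDI tick has elapsed
    event EventHandler Started;   // playback starts from the beginning
    event EventHandler Continued; // playback resumes from the current position
    event EventHandler Stopped;   // playback has stopped
    bool IsRunning { get; }
}

// Minimal illustrative implementation. Starting/stopping is deliberately
// not part of IClock: who starts the clock depends on the clock's kind,
// so it belongs on the concrete class.
public class StubClock : IClock
{
    public event EventHandler Tick;
    public event EventHandler Started;
    public event EventHandler Continued;
    public event EventHandler Stopped;

    public bool IsRunning { get; private set; }

    public void Start()
    {
        IsRunning = true;
        Started?.Invoke(this, EventArgs.Empty);
    }
}
```
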

The Tick event occurs when a MIDI tick has elapsed. The Started, Continued, and Stopped events are self-explanatory. However, it should be pointed out that when the Started event occurs, sequence playback starts from the beginning of the sequence. When the Continued event occurs, playback starts from the current position. The IsRunning property indicates whether the clock is running.

You may notice that the interface has no methods for starting and stopping a clock. That is because with clocks driven by an external source, the source is responsible for starting and stopping the clock: the clock receives messages via MIDI and, based on those messages, starts or stops generating ticks. Since all MIDI clocks implement IClock, the interface represents only the functionality common to all of them.

At this time, the toolkit provides only one clock class, the MidiInternalClock. This clock generates MIDI ticks internally using pulses per quarter note resolution. For the majority of situations, this clock will work fine.

The MidiInternalClock has a Tempo property for setting the tempo in microseconds per beat. To set the tempo to 120 bpm, for example, you would set the Tempo property to 500000. It can receive meta tempo change messages. When a meta tempo change message is passed to it, it changes its tempo to match the tempo represented by the message.
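The conversion between beats per minute and microseconds per beat is simple division against the number of microseconds in a minute. A small illustrative helper (MidiTempo and its method names are not part of the toolkit):

```csharp
using System;

public static class MidiTempo
{
    // MIDI stores tempo as microseconds per quarter note (beat).
    public const int MicrosecondsPerMinute = 60_000_000;

    // 120 bpm -> 60,000,000 / 120 = 500,000 microseconds per beat.
    public static int BpmToMicroseconds(double bpm) =>
        (int)Math.Round(MicrosecondsPerMinute / bpm);

    // The inverse: 500,000 microseconds per beat -> 120 bpm.
    public static double MicrosecondsToBpm(int microsecondsPerBeat) =>
        (double)MicrosecondsPerMinute / microsecondsPerBeat;
}
```

BpmToMicroseconds(120) yields 500000, matching the example above.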

A MIDI file is made up of several tracks. Each track contains one or more timestamped MIDI messages. The timestamp represents the number of ticks since the last message was played. This timestamp is called delta ticks. The MidiEvent class represents a timestamped MIDI message. It has three public properties:

DeltaTicks

AbsoluteTicks

MidiMessage

The DeltaTicks property represents the number of ticks since the last MidiEvent. In other words, this value represents how long to wait after playing the previous MidiEvent before playing the current MidiEvent. For example, if the DeltaTicks value is 10, we would allow 10 ticks to elapse before playing the MIDI message represented by the current MidiEvent.

The AbsoluteTicks property represents the overall position of the MidiEvent: the total number of ticks that have elapsed up to the current MidiEvent.
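The relationship between the two tick values is a running sum: each event's absolute position is the previous event's position plus its own delta. A small illustrative helper (TickMath is not a toolkit class):

```csharp
using System.Collections.Generic;

public static class TickMath
{
    // AbsoluteTicks is the cumulative sum of DeltaTicks over the track.
    public static int[] DeltasToAbsolutes(IEnumerable<int> deltaTicks)
    {
        var absolutes = new List<int>();
        int position = 0;
        foreach (int delta in deltaTicks)
        {
            position += delta;      // wait 'delta' ticks after the previous event
            absolutes.Add(position); // then this event plays at 'position'
        }
        return absolutes.ToArray();
    }
}
```

For deltas of 0, 10, and 5, the events fall at absolute positions 0, 10, and 15.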

The MidiMessage property is the IMidiMessage represented by the MidiEvent.

In addition, there are two internal properties, one pointing to the previous MidiEvent in the track and one pointing to the next. In other words, the MidiEvent class acts as a node in a doubly linked list of MidiEvents.

The Track class represents a collection of MidiEvents. It is responsible for maintaining a collection of MidiEvents in proper order. MidiEvents are not directly added to a Track. Instead, you add an IMidiMessage, specifying its absolute position in the Track. The Track then creates a MidiEvent to represent the message and inserts it into its collection of MidiEvents.

In addition to providing functionality for adding and removing MIDI events, the Track class provides several iterators. There is a standard iterator that simply iterates over the MidiEvents one at a time. Another iterator takes a MessageDispatcher object and passes each IMidiMessage to the dispatcher, which in turn raises an event specific to the type of message it is dispatching. The value this iterator returns is the absolute tick count of the current MidiEvent.

Perhaps the most useful iterator is the one that, when advanced, moves forward only one tick at a time. This iterator keeps track of its tick position in the Track. When the tick count reaches the point at which it is time to play the next MidiEvent, it passes the IMidiMessage represented by that MidiEvent to the MessageDispatcher and returns the absolute tick count. This iterator also takes a ChannelChaser object as well as a start position value, and "chases" up to the start position before switching to playback mode. In essence, this iterator allows us to stream the Track in real time.

The Sequence class represents a collection of Tracks. It also provides functionality for loading and saving MIDI files, so Sequences can load and save themselves.

Every Sequence has a division value. This value represents the resolution of the Sequence and is represented by a property. There are two types of division values: Pulses per quarter note and SMPTE. The Sequence has a SequenceType property indicating the sequence type. Unfortunately, SMPTE sequences aren't supported at this time.

There are several MIDI device classes in the toolkit. Each device class is derived directly or indirectly from the abstract Device class in the Sanford.Multimedia namespace. The InputDevice class represents a MIDI device capable of receiving MIDI messages from an external source, such as a MIDI keyboard controller or synthesizer. The OutputDeviceBase class is an abstract class that serves as the base class for the output device classes. The OutputDevice class represents a MIDI device capable of sending MIDI messages to an external device or to your soundcard. And the OutputStream class encapsulates the Windows Multimedia MIDI output stream API; it is capable of playing back timestamped MIDI messages.

There can be more than one of these devices present on your computer. To determine the number of input devices present, for example, you would query the InputDevice class's static DeviceCount property. The output device classes also have this property.

Each MIDI device has its own unique ID. This is simply an integer value representing the device's order in the list of available devices. For example, the first output device on your system would have an ID of 0, the second an ID of 1, and so on. The same is true for the input devices. When you create a MIDI device, you pass the ID of the device you wish to use to its constructor. If there is an error opening the device, an exception is thrown.

To find out the capabilities of a device, you call the class's static GetDeviceCapabilities method, passing it the ID of the device you are interested in. This method returns a structure filled with values representing the capabilities of the specified MIDI device.

The InputDevice class represents a MIDI device capable of receiving MIDI messages. It has an event for each of the major MIDI message types it can receive. To receive MIDI messages, you connect to one or more of these events. Then you call the StartRecording method. Recording continues until either StopRecording or Reset is called. The InputDevice lets you set the size of the sysex buffer it uses to receive sysex messages. When the InputDevice has received a complete sysex message, it raises the SysExReceived event.

The OutputStream class is also derived from the OutputDeviceBase class. It encapsulates the Windows multimedia MIDI output stream API. It provides functionality for playing back MIDI messages.

To play MIDI messages, you call StartPlaying. The OutputStream will then begin playing back any MIDI messages in its queue. To place MIDI messages in the queue, you first write one or more MidiEvents using the Write method. After writing the desired number of MidiEvents, you call Flush. This flushes the events to the stream causing it to play them back.

The Sequencer class is back. It's a lightweight class for playing back Sequences. I felt the previous MidiFilePlayer class was not the best means for playing back MIDI sequences, and I wanted to give the toolkit the ability to play Sequences you create programmatically. One issue that caused me to shy away from a Sequencer class (after having created one in earlier versions) is the problem of a Sequence changing while it is being played by a Sequencer. I still haven't solved that problem, but I didn't want it to prevent easy Sequence playback. So I'm putting in a new version of the Sequencer class with the understanding that it's meant for simple playback; for anything more sophisticated, you can use it as a starting point.

The MIDI toolkit depends on the DelegateQueue class from my Sanford.Threading namespace; the InputDevice and OutputDevice classes use it for queueing MIDI events. In turn, the Sanford.Threading namespace depends on my Sanford.Collection namespace, so that assembly is also necessary for the toolkit to compile. Finally, the toolkit uses the Sanford.Multimedia namespace. I've provided all of the assemblies with the download. I've linked the projects that use them to the assemblies in hopes that the toolkit will compile out of the box. Hopefully, the days of having trouble compiling my projects because of not having the right assemblies are over.

This article has provided an overview of my .NET MIDI toolkit. My hope is that it will be a useful and powerful tool for writing MIDI applications. It has been a lot of fun to write. Each version has represented the very best of my skill and knowledge as a programmer. I welcome feedback and any bug reports you may have. Take care and thanks for your time.


About the Author

Aside from dabbling in BASIC on his old Atari 1040ST years ago, Leslie's programming experience didn't really begin until he discovered the Internet in the late 90s. There he found a treasure trove of information about two of his favorite interests: MIDI and sound synthesis.

After spending a good deal of time calculating formulas he found on the Internet for creating new sounds by hand, he decided that an easier way would be to program the computer to do the work for him. This led him to learn C. He discovered that beyond using programming as a tool for synthesizing sound, he loved programming in and of itself.

Eventually he taught himself C++ and C#, and along the way he immersed himself in the ideas of object oriented programming. Like many of us, he got bitten by the design patterns bug, and a copy of GoF is never far from his hands.

Now his primary interest is in creating a complete MIDI toolkit using the C# language. He hopes to create something that will become an indispensable tool for those wanting to write MIDI applications for the .NET framework.

Besides programming, his other interests are photography and playing his Les Paul guitars.

Hi Leslie, I am a full-time report and data processing programmer for a company here in NJ. I need a push start: something with two or three controls on it... maybe one just starts a bass drum with a drop-down tempo selector, and a button that just hits a note on a MIDI instrument selected from another drop-down. Just to get me started. I would be very happy to pay/donate $100 or so if that is appropriate. Just asking... Thanks. Tim.

Unfortunately, it's been a few years since I've opened Visual Studio and done any C# programming. I literally haven't looked at the code in my toolkit in years. So it would take a mental reboot to get myself back into it, and unfortunately I don't have the time.

But maybe someone reading your post and who is actively using the toolkit could lend a hand?

I totally understand... and thanks anyway...
So... Perhaps there is another genius out there who can do this?
What I really want is a stand alone C# project that can do the above.
Stand-alone insofar as I can look at what you put in each button and trace it back (not too far) to the code, so I can start learning this stuff and write some code myself.
I admit this may be over my head, but I really think that if I could get a running start I could take it from there. Thanks, guys.

I have just noticed that midi files can be embedded with chord symbols using the Yamaha XF format, and I am writing to ask if it will be possible for me to edit the midi toolkit source (I have Visual Studio 2010 on a Windows 7 machine) to include this type of meta data in my midi tracks produced by your toolkit.
I would also like to thank you for the toolkit...I have found it very useful over the past years and have been able to develop my own midi software using the toolkit. I have used Rich Text Edit controls in a windows forms project to enter melodies in text format using a midi keyboard as input. Midi pitch is stored as a superscript offset and ticks are stored as colour codes. The rhythm is tapped in later using the INS key on my computer keyboard with a metronome. This means that notes can be recorded first without worrying about the rhythm and when the rhythm is added the correct pitches are already in place. This makes life easier for folk such as me who find it difficult to record in real time and get a decent midi lead sheet. It is also very easy to edit the notes and rhythm, something which is not always straightforward in sequencer software. I can easily add lyrics and embed them in the midi file using your toolkit but if chord symbols could be added as well it would be good.

Hi everybody,
I have this problem: I need to play notes of certain duration, better in seconds, but I don't know if it is possible and how can I perform this task.
I use a NoteOn message: is it possible to assign a duration with this message?

A duration is nothing else than the time between on and off messages.
So you don't qualify a note on with a duration, but rather the note off with the delta time for it to strike after the corresponding note on's been sent.

You just have to calculate this delta time with your tick duration:
If tempo is 120BPM then one quarter note is half a second,
If you need 1 whole second, duration is then twice a quarter note.

If your resolution (PPQ = Pulses Per Quarter) is 96, then you may say the delta ticks needed between on and off messages should be twice this value, then 192.
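The arithmetic in this reply can be captured in a small helper (NoteDuration is illustrative, not part of the toolkit):

```csharp
using System;

public static class NoteDuration
{
    // Delta ticks to place between a note-on and its note-off, for a
    // duration given in seconds, at the given tempo (BPM) and resolution (PPQ).
    public static int TicksFor(double seconds, double bpm, int ppq)
    {
        double secondsPerQuarter = 60.0 / bpm;              // 0.5 s at 120 BPM
        return (int)Math.Round(seconds / secondsPerQuarter * ppq);
    }
}
```

At 120 BPM and 96 PPQ, one whole second is two quarter notes, i.e. 192 ticks, as described above.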

Is there an easy way to relay raw data from a MIDI input to a MIDI output? At first glance, it would appear that I have to interpret the incoming MIDI stream and then send it out using the specific SysEx, noteon, noteoff, etc. messages. I kinda just want to take the data coming in and send it verbatim to the output...essentially working with just a raw stream.

Answering my own question. I hacked at OutputDeviceBase.cs and MidiHeaderBuilder.cs. Using the SystemExclusive message as a template, I was able to fairly easily create a new overload that worked off of a byte[]. If you add these two functions, it is then easy to take data captured from a serial port, udp, tcp, stream, file, or whatever and just send it verbatim out the port.

Great toolkit! I'm looking forward to using it in an algorithmic composition project of mine.

However I found some bugs with OutputStream, so you might want to fix the code?

OutputStream.Close:

if(!IsDisposed)

should be:

if(IsDisposed)

Unless this is fixed, closing the stream does nothing (and the program will not usually exit until the stream is closed).

MidiHeaderBuilder.Build:

header.bytesRecorded = 0;

should be:

header.bytesRecorded = BufferLength;

Unless this is fixed, Windows thinks that there is no data to play and ignores the buffer. No MIDI events will play and the attempt to stream results in MOM_DONE immediately.

Also, I found it was necessary to add a Done event to OutputStream (following the pattern you used for NoOpOccurred) so that client code can know when to close the stream or send it more data or whatever.

Thanks for the great toolkit. I know you're not active with it any more, but I hope you can add in these simple fixes anyway so that nobody else will have to spend the time I did to figure out why it wasn't working.

I always receive an "Invalid pulses per quarter note value" exception when trying to play a MIDI file generated by an application I wrote. I've set ticks per quarter note to 24 in the MIDI file header. Any idea why I receive this exception?

The problem is that the value has to be divisible by the PpqnMinValue, which is 24. If we have, for example, 1024, it is not divisible by 24: 1024 / 24 = 42.66...
So we divide the value by PpqnMinValue, round it up or down, and multiply it by 24 again:
1024 / 24 = 42.66 -> Math.Round -> 43 * 24 = 1032. And there we have a valid value.
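The rounding described in that reply, as a small illustrative helper (PpqnFix is not part of the toolkit; the 24 minimum is taken from the thread):

```csharp
using System;

public static class PpqnFix
{
    public const int PpqnMinValue = 24; // the toolkit requires multiples of this

    // Round an arbitrary PPQN value to the nearest multiple of PpqnMinValue.
    public static int NearestValid(int ppqn) =>
        (int)Math.Round(ppqn / (double)PpqnMinValue) * PpqnMinValue;
}
```

NearestValid(1024) gives 1032, matching the worked example; values already divisible by 24, such as 96, pass through unchanged.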

I have been using your great toolkit for some time. I am still using V4 since it is working well for me and I have a lot of code based on it. Anyway, I have a customer who tried running on a 64 bit W7 machine and as I'm sure you're not surprised it had problems. I saw you post about the pointers and it doesn't look like there are that many in the code. Are there any other changes that you know of to make this go? I appreciate all your work and look forward to hearing from you.

Unfortunately, I haven't worked on the lib in several years. This is just a guess, but does .NET have an Int64 pointer? If so, maybe the Int32s need to be switched over. Better yet, is there a .NET pointer type that ports to the correct size regardless of platform?

Sorry I can't be of more help. I've been out of the .NET/C# world for a long time now.

was there any progress on this? i've made a public repo on github and changed all int handles to IntPtr which is correct for each platform. so if you have any more suggestions please add them on github: