I play with dsPICs, and I use DMA for several things. I have not used it for a UART, but a UART should not be a difficult case.

What I do in the development stage of a synth is write boring little "hello world" projects that prove I understand how to use the peripheral. Once those are done, I have setup code, and usually driver code, that I can copy into the synth project.

As for MIDI, my main project is a 3-dsPIC synth called "Harpie" - an 8-string Karplus-Strong harp/drum synth. I used two dsPICs as voice engines, each supporting 4 strings. The third dsPIC is the MIDI controller and voice assigner. I did this because it relieves the voice engines of that logic and dedicates their power to sound production.

The commands from the MIDI controller and voice assigner are sent over SPI to the voice engines. In this case, the SPI is DMA enabled, but the UART is not. The SPI message structure, format, and size are fixed. A simple ISR responds to the UART interrupt, stores the newly arrived byte in RAM, and sets a flag for the main loop to detect, react to, and clear. The MIDI controller code monitors a "completed message" flag and, when a complete message has been received, reacts by constructing the SPI message and sending it via DMA. DMA is also used for receiving the SPI messages in the voice engines.

I would argue that DMA on the UART may not be all that much more efficient for a MIDI controller. For one thing, data comes in really slowly at 31.25 kBaud. And each transfer needs an interrupt anyway, because MIDI messages can be 1, 2, or 3 bytes for performance messages, and many more for system exclusive messages. Because there is no solidly fixed data format for MIDI (such as ALL MESSAGES ARE 3 BYTES), it takes computation with each incoming byte to deal with the data - you can't just wait for a certain number of bytes per message. Consider running status as well, since that throws variable message size into the mix too.

On top of that, you really want the synth to react as soon as possible after receipt of a complete message. IMO, a circular buffer is necessary only if you don't have CPU cycles to dedicate to MIDI controller logic on an on-demand basis. I don't like the extra latency, so I would arrange the synth and controller code to allow for this computation time. That can be tricky, especially in C (I use ASM), where it's not easy to know how many cycles a given statement may consume. Again - my opinion - but MIDI data comes in slowly enough, and a dsPIC is fast enough, that you shouldn't need a circular buffer, even for system exclusive messages. In my synth there is a 3-byte buffer that stores the incoming bytes until a complete message is found - then it reacts immediately.

I plan to do a synth soon that will use a single dsPIC with a MIDI controller inside it. I do not intend to use DMA for the UART due to what I wrote above.
_________________
FPGA, dsPIC and Fatman Synth Stuff

Time flies like a banana. Fruit flies when you're having fun. BTW, do these genes make my ass look fat? corruptio optimi pessima

Thank you very much for the reply! I have come across your Harpie synth before, since there aren't too many dsPIC synths around the interwebs! It sounds awesome...

I agree it's not entirely necessary, and you make some good suggestions. I'll try to explain why I'm doing it...

I'm reaching the CPU limit and need to optimise it to hopefully fit in a third oscillator!

Currently, each audio sample is calculated in the DAC interrupt one by one. If I try to calculate samples in the main loop, it's nowhere near quick enough to keep up with the DAC. I have been told I should be using DMA to transfer a buffer of samples calculated in main to the DAC. I really want to optimise this as much as possible, so I'd like to try this, and I thought it would be easier to start with the UART instead of the DAC....

I should also say there are as many ways to do this basic task as there are developers doing it.

I would start with just an ISR for the UART and see how it goes.

This project will require multitasking with 2 main tasks: 1) Synth Voice and 2) MIDI controller. What will make or break the project is how that multitasking is implemented and how much time (percentage of CPU) is dedicated to MIDI and how much is dedicated to voicing. If not enough is dedicated to voicing, your synth will suffer late or missing output samples which will reduce the quality of the output. Starve the MIDI controller and you add latency (and MIDI is already bad enough).

Bottom line though is that simplicity makes coding and maintenance easier. If there is a reasonable amount of time left over after a sample is computed, that time can be devoted to MIDI logic. DMA and how it is used becomes more important as the left over time shrinks.

It may actually make more sense to rely simply on an ISR for the MIDI UART and use DMA for the DAC and ADC (if an ADC is used). The thing that keeps me away from using DMA for a MIDI UART is that you really want to know when each byte arrives, because of message-size variability. Because of that, as previously mentioned, an interrupt is needed for each incoming byte anyway. If the code doesn't allow nesting of interrupts, then the shadow registers become a key point in efficiency, because the interrupt won't necessarily cause stacking.

To answer one of your questions, if the ring buffer is being filled by the UART DMA controller, then you'd want the ring buffer to be in DMA RAM so that you don't have to move the data from DMA RAM to the ring buffer RAM.

Hope that problem makes sense. Any advice there?

Many thanks, Matt

Ah, ok, you're at the CPU limit...

The only advice there would be to write test code to do some timing to find out whether DMA for a UART is truly advantageous. Test with DMA, test without - faster wins.

And I too have been "told" that the way to do these things is to use the DAC in DMA mode and load up the FIFO. For me, this makes things more complex than I like and in the case of Harpie, there were so few cycles left over at the end of a sample calculation that it wouldn't have improved anything to use DMA for the DAC.

You don't say if you're using C or ASM.

If C:
I'm not sure if it's available, but the keyword "inline" for defining functions might help to prevent function-call overhead (stacking and unstacking). You want your sample-calculation code to execute as one long linear sequence of instructions (after all, you have lots of Flash).

If ASM:
Same idea as for C: I don't use the CALL instruction at all, and as few jump/branch instructions as possible. The code is put together using macros instead of CALL instructions. This allows easier maintenance, and it's as fast as it gets.

A quick Google check on "C30 inline keyword" says that it is available.

If you have lots of function calls - then that is where the CPU is being wasted and inline could drastically improve the situation. It will make the output hex code larger, but if it fits with all of your functions "inline", then you might just get that extra oscillator.

One more thing - there is a "student" version of the C30 compiler, which is free, and a purchasable version. The one you buy has all optimization features enabled; the student version is somewhat optimization-crippled. I've been informed that the inline keyword is something the compiler may ignore - that is, the program will still work without it.

Especially if you use the free version, you'll want to know if "inline" is actually helping you. To find out, you can either execute timing tests for speed or you can look at the generated assembly code to see if the function calls are occurring with CALL instructions or if the compiler has built truly inline code by inserting the function code where needed.

Or you could use the "messy way" and place all of the code into the main function and remove the function calls. That would mean repeating the code for functions used multiple times, but it eliminates the stacking overhead.

Just curious, but have you tried invoking the compiler with the "-finline" option? You can add that in the shortcut. You never know - it might work...

It's possible that using the LITE version disables that - that the compiler simply ignores the setting.

This whole thing is why I decided to use ASM instead of C (and I am a veteran C programmer who likes C).

In ASM - you are the one in TOTAL control. There is nothing left to "compiler optimization".

It's a different world, but it is well worth it IMO.

I agree with you that the chip is powerful enough to do what you want. It's just a matter of telling it what to do. The C30-lite compiler is meant for students doing low-intensity projects for class, and it works well for that. But to get the most out of any chip, I think ASM is the best approach - because you are talking to the machine in the machine's language.

The dsPIC is a low-end DSP device: it's only 16 bits and isn't really that fast (many serious DSP devices run at hundreds of megahertz). So a C compiler with poor optimization isn't really ideal. With ASM you can make human optimizations that might never happen in a C compiler.

For the Harpie, I was able to get 4 voices out of each of the 2 voice engines by careful optimization of my own ASM code. 4 Karplus-Strong voices (oversampled by 4x including a single-pole IIR filter per voice) should give a fair idea of what the dsPIC33F is capable of at 40 MIPS. A possible equivalent would be 16 oscillators with 16 filters each (though I will never admit to having tested that). Each Karplus-Strong voice is more than just an oscillator. I'd publish the code, but I'm in the process of attempting to market it.
